From c2bce9bf49571bc67ac7daf42b708d47592b83f8 Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 17 Jul 2023 10:15:34 -0400 Subject: [PATCH 001/103] intro trusted category --- website/docs/docs/supported-data-platforms.md | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md index a8ae33a7e0c..868ba98d67a 100644 --- a/website/docs/docs/supported-data-platforms.md +++ b/website/docs/docs/supported-data-platforms.md @@ -6,11 +6,16 @@ description: "Connect dbt to any data platform in dbt Cloud or dbt Core, using a hide_table_of_contents: true --- -dbt connects to and runs SQL against your database, warehouse, lake, or query engine. These SQL-speaking platforms are collectively referred to as _data platforms_. dbt connects with data platforms by using a dedicated adapter plugin for each. Plugins are built as Python modules that dbt Core discovers if they are installed on your system. Read [What are Adapters](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) for more info. +dbt connects to and runs SQL against your database, warehouse, lake, or query engine. These SQL-speaking platforms are collectively referred to as _data platforms_. dbt connects with data platforms by using a dedicated adapter plugin for each. Plugins are built as Python modules that dbt Core discovers if they are installed on your system. Read [What are Adapters](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) for more info. -You can [connect](/docs/connect-adapters) to adapters and data platforms either directly in the dbt Cloud user interface (UI) or install them manually using the command line (CLI). There are two types of adapters available and to evaluate quality and maintenance, we recommend you consider their verification status. You can also [further configure](/reference/resource-configs/postgres-configs) your specific data platform to optimize performance. +You can also [further configure](/reference/resource-configs/postgres-configs) your specific data platform to optimize performance. -- **Verified** — dbt Labs' strict [adapter program](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter) assures users of trustworthy, tested, and regularly updated adapters for production use. Verified adapters earn a "Verified" status, providing users with trust and confidence. +## Types of Adapters + +You can [connect](/docs/connect-adapters) to adapters and data platforms either directly in the dbt Cloud user interface (UI) or install them manually using the command line (CLI). There are three types of adapters available today. The purpose of differentiation is to provide users with an easier means to evaluate adapter quality. + +- **Verified** — dbt Labs' strict [adapter program](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter) assures users of trustworthy, tested, and regularly updated adapters for production use. Verified adapters earn a "Verified" status, providing users with trust and confidence. +- **Trusted** — Trusted adapters are those where the adapter maintainers have agreed to meet a higher standard of quality. - **Community** — [Community adapters](/docs/community-adapters) are open-source and maintained by community members. 
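As a rough illustration of the platform-specific configuration mentioned above (the Postgres configs reference), adapter-specific options can be set alongside ordinary model configs. The sketch below assumes a hypothetical project named `jaffle_shop` using `dbt-postgres`, where the `indexes` config is specific to that adapter; all names are placeholders.

```yaml
# dbt_project.yml (illustrative sketch; project and folder names are placeholders)
models:
  jaffle_shop:
    marts:
      +materialized: table
      +indexes:            # dbt-postgres specific configuration
        - columns: [order_id]
          unique: true
```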
### Verified adapters From a979eda60fc7da18552f8eef1e5148c16a27f761 Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 17 Jul 2023 12:22:28 -0400 Subject: [PATCH 002/103] content draft --- website/docs/docs/supported-data-platforms.md | 4 +- website/docs/docs/trusted-adapters.md | 59 +++++++++++++ .../8-trusting-a-new-adapter.md | 82 +++++++++++++++++++ website/sidebars.js | 2 + 4 files changed, 145 insertions(+), 2 deletions(-) create mode 100644 website/docs/docs/trusted-adapters.md create mode 100644 website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md index 868ba98d67a..0f150c50440 100644 --- a/website/docs/docs/supported-data-platforms.md +++ b/website/docs/docs/supported-data-platforms.md @@ -15,8 +15,8 @@ You can also [further configure](/reference/resource-configs/postgres-configs) y You can [connect](/docs/connect-adapters) to adapters and data platforms either directly in the dbt Cloud user interface (UI) or install them manually using the command line (CLI). There are three types of adapters available today. The purpose of differentiation is to provide users with an easier means to evaluate adapter quality. - **Verified** — dbt Labs' strict [adapter program](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter) assures users of trustworthy, tested, and regularly updated adapters for production use. Verified adapters earn a "Verified" status, providing users with trust and confidence. -- **Trusted** — Trusted adapters are those where the adapter maintainers have agreed to meet a higher standard of quality. -- **Community** — [Community adapters](/docs/community-adapters) are open-source and maintained by community members. +- **Trusted** — [Trusted adapters](trusted-adapters) are those where the adapter maintainers have agreed to meet a higher standard of quality. +- **Community** — [Community adapters](community-adapters) are open-source and maintained by community members. ### Verified adapters diff --git a/website/docs/docs/trusted-adapters.md b/website/docs/docs/trusted-adapters.md new file mode 100644 index 00000000000..cd517ca1ac9 --- /dev/null +++ b/website/docs/docs/trusted-adapters.md @@ -0,0 +1,59 @@ +--- +title: "Trusted adapters" +id: "trusted-adapters" +--- + +Trusted adapters are adapters not maintained by dbt Labs, that we feel comfortable recommending to users for use in production. + +### to be toggle heading'd + +Free and open-source tools for the data professional are increasingly abundant. This is by-and-large a *good thing*, however it requires due dilligence that wasn't required in a paid-license, closed-source software world. Before taking a dependency on an open-source projet is is important to determine the answer to the following questions: + +1. Does it work? +2. Does anyone "own" the code, or is anyone liable for ensuring it works? +3. Do bugs get fixed quickly? +4. Does it stay up-to-date with new Core features? +5. Is the usage substantial enough to self-sustain? +6. What risks do I take on by taking a dependency on this library? + +### for adapter maintainers + +if you're an adapter maintainer interested in joining the trusted adapter program click [Building a Trusted Adapter](8-building-a-trusted-adapter). 
+ +### Trusted vs Verified + +The Verification program (currently paused) exists to highlight adapters that meets both of the following criteria: + +- the guidelines given in the Trusted program, +- formal agreements required for integration with dbt Cloud + +For more information on the Verified Adapter program, reach out the [dbt Labs parnterships team](partnerships@dbtlabs.com) + + +### Trusted adapters + +The following are **Trusted adapters** βœ“ you can connect to in dbt Core: + +
+ + + + + + + + + +
diff --git a/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md b/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md
new file mode 100644
index 00000000000..fbaabe83ea0
--- /dev/null
+++ b/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md
@@ -0,0 +1,82 @@
+---
+title: "Building a Trusted Adapter"
+id: "8-building-a-trusted-adapter"
+---
+
+The Trusted adapter program exists to allow adapter maintainers to demonstrate to the dbt community that their adapter can be trusted for use in production.
+
+## What does it mean to be trusted
+
+Below are some categories with stuff.
+
+By opt-ing into the below, you agree to this, and we take you at your word. dbt Labs reserves the right to remove an adapter from the trusted adapter list at any time, should any of the below guidelines not be met.
+
+### Feature Completeness
+
+To be considered for the Trusted Adapter program, the adapter must cover the essential functionality of dbt Core given below, with best effort given to support the entire feature set.
+
+The adapter should have the required documentation for connecting and configuring the adapter. The dbt docs site should be the single source of truth for this information. These docs should be kept up-to-date.
+
+See [this guide](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter) for more information
+
+#### what is essential?
+
+tables, views, seeds, tests etc
+
+### Release Cadence
+
+Keeping an adapter up-to-date with dbt Core is an integral part of being a trusted adapter. Therefore, we ask that adapter maintainers:
+
+- Release new minor versions of the adapter, with all tests passing, within four weeks of dbt Core's release cut.
+- Release new major versions of the adapter, with all tests passing, within eight weeks of dbt Core's release cut.
+
+### Community Responsiveness
+
+On a best-effort basis, we ask for active participation and engagement with the dbt Community across the following forums:
+
+- Being responsive to feedback and supporting user enablement in dbt Community’s Slack workspace
+- Responding with comments to issues raised in the public dbt adapter code repository
+- Merging in code contributions from community members as deemed appropriate
+
+### Security Practices
+
+Trusted adapters will not do any of the following:
+
+- Output to logs or files either access credentials to, or data from, the underlying data platform itself.
+- Make API calls other than those expressly required for using dbt features (adapters may not add additional logging)
+- Obfuscate code and/or functionality so as to avoid detection
+- Use the Python runtime of dbt to execute arbitrary Python code
+- Draw a dependency on dbt’s Python API beyond what is required for core data transformation functionality as described in the Essential and Extended feature tiers
+
+Additionally, to avoid supply-chain attacks:
+
+- Use an automated service to keep Python dependencies up-to-date (such as Dependabot or similar)
+- Publish directly to PyPI from the dbt adapter code repository by using a trusted CI/CD process (such as GitHub Actions; see the illustrative sketches at the end of this guide)
+- Restrict admin access to both the respective code (GitHub) and package (PyPI) repositories
+- Identify and mitigate security vulnerabilities by using a static code analysis tool (such as Snyk) as part of a CI/CD process
+
+### Other considerations
+
+The adapter repository is:
+
+- open-source licensed,
+- published to PyPI, and
+- automatically tested against dbt Labs' provided adapter test suite
+
+## How to get an adapter verified?
+
+To submit your adapter for consideration as a Trusted adapter, use the "trusted adapter" issue template on the docs.getdbt.com repository. This will prompt you to agree to the following checkboxes:
+
+1. My adapter meets the guidelines given above
+2. I will make a best reasonable effort to ensure this continues to be so
+3. I acknowledge that dbt Labs reserves the right to remove an adapter from the trusted adapter list at any time, should any of the above guidelines not be met.
+
+The approval workflow is as follows:
+
+1. Create and populate the template-created issue
+2. dbt Labs will respond as quickly as possible (within four weeks at the most, though likely faster)
+3. If approved, dbt Labs will create and merge a pull request to formally add the adapter to the list.
+
+## How to get help with my trusted adapter?
+
+Ask your question in the #adapter-ecosystem channel of the community Slack.
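### Illustrative supply-chain configuration

The supply-chain guidelines above map onto small, standard configuration files. The sketches below are illustrative only; the file paths, workflow name, and version pins are assumptions made for the sake of example, not requirements of the program.

A minimal Dependabot configuration that keeps Python dependencies current:

```yaml
# .github/dependabot.yml: automated dependency updates for a Python adapter repo (illustrative)
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
```

And one possible shape for publishing releases to PyPI directly from CI, using PyPI's trusted-publishing (OIDC) support so that no long-lived API token is stored in the repository:

```yaml
# .github/workflows/release.yml: sketch of a CI/CD publish job (illustrative)
name: Release to PyPI
on:
  release:
    types: [published]

jobs:
  build-and-publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for PyPI trusted publishing
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: Build distributions
        run: |
          python -m pip install build
          python -m build
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```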
diff --git a/website/sidebars.js b/website/sidebars.js index e10ebd513c2..5ec41bf2d86 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -8,6 +8,7 @@ const sidebarSettings = { link: { type: "doc", id: "docs/supported-data-platforms" }, items: [ "docs/connect-adapters", + "docs/trusted-adapters", "docs/community-adapters", "docs/contribute-core-adapters", ], @@ -1033,6 +1034,7 @@ const sidebarSettings = { "guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter", "guides/dbt-ecosystem/adapter-development/6-promoting-a-new-adapter", "guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter", + "guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter", ], }, { From 504935a80c0be84b67e21423034451e176ec38c5 Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 17 Jul 2023 13:19:19 -0400 Subject: [PATCH 003/103] snippet card tables --- website/docs/docs/supported-data-platforms.md | 61 +++---------------- website/docs/docs/trusted-adapters.md | 24 +------- website/snippets/_adapters-trusted.md | 23 +++++++ website/snippets/_adapters-verified.md | 56 +++++++++++++++++ 4 files changed, 89 insertions(+), 75 deletions(-) create mode 100644 website/snippets/_adapters-trusted.md create mode 100644 website/snippets/_adapters-verified.md diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md index 0f150c50440..068fd6660fe 100644 --- a/website/docs/docs/supported-data-platforms.md +++ b/website/docs/docs/supported-data-platforms.md @@ -22,62 +22,17 @@ You can [connect](/docs/connect-adapters) to adapters and data platforms either The following are **Verified adapters** βœ“ you can connect to either in dbt Cloud or dbt Core: -
+import AdaptersVerified from '/snippets/_adapters-verified.md'; - - + - - - - - - - - - - - +
+* Install these adapters using the CLI as they're not currently supported in dbt Cloud.
- +### Trusted adapters - - +The following are **Trusted adapters** βœ“ you can connect to in dbt Core: -
+import AdaptersTrusted from '/snippets/_adapters-trusted.md'; -
-* Install these adapters using the CLI as they're not currently supported in dbt Cloud.
+ diff --git a/website/docs/docs/trusted-adapters.md b/website/docs/docs/trusted-adapters.md index cd517ca1ac9..86bed045f44 100644 --- a/website/docs/docs/trusted-adapters.md +++ b/website/docs/docs/trusted-adapters.md @@ -34,26 +34,6 @@ For more information on the Verified Adapter program, reach out the [dbt Labs pa The following are **Trusted adapters** βœ“ you can connect to in dbt Core: -
+import AdaptersTrusted from '/snippets/_adapters-trusted.md'; - - - - - - - - -
+ \ No newline at end of file diff --git a/website/snippets/_adapters-trusted.md b/website/snippets/_adapters-trusted.md new file mode 100644 index 00000000000..57e667e5198 --- /dev/null +++ b/website/snippets/_adapters-trusted.md @@ -0,0 +1,23 @@ +
+ + + + + + + + + +
diff --git a/website/snippets/_adapters-verified.md b/website/snippets/_adapters-verified.md new file mode 100644 index 00000000000..fe0bc394d09 --- /dev/null +++ b/website/snippets/_adapters-verified.md @@ -0,0 +1,56 @@ +
+ + + + + + + + + + + + + + + + + + + + + +
\ No newline at end of file From fe8eeb8575bc60f81e26dc9acf667fbcf5252696 Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 17 Jul 2023 13:24:23 -0400 Subject: [PATCH 004/103] add verified adapters --- website/docs/docs/supported-data-platforms.md | 3 --- website/docs/docs/verified-adapters.md | 21 ++++++------------- website/sidebars.js | 1 + website/snippets/_adapters-verified.md | 4 +++- 4 files changed, 10 insertions(+), 19 deletions(-) diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md index 068fd6660fe..bc2a5a119a5 100644 --- a/website/docs/docs/supported-data-platforms.md +++ b/website/docs/docs/supported-data-platforms.md @@ -26,9 +26,6 @@ import AdaptersVerified from '/snippets/_adapters-verified.md'; -
-* Install these adapters using the CLI as they're not currently supported in dbt Cloud.
- ### Trusted adapters The following are **Trusted adapters** βœ“ you can connect to in dbt Core: diff --git a/website/docs/docs/verified-adapters.md b/website/docs/docs/verified-adapters.md index 9604d05391c..3da1f2caf7b 100644 --- a/website/docs/docs/verified-adapters.md +++ b/website/docs/docs/verified-adapters.md @@ -8,23 +8,14 @@ The dbt Labs has a rigorous verified adapter program which provides reassurance These adapters then earn a "Verified" status so that users can have a certain level of trust and expectation when they use them. The adapters also have maintainers and we recommend using the adapter's verification status to determine its quality and health. +To learn more, see [Verifying a new adapter](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter). + Here's the list of the verified data platforms that can connect to dbt and its latest version. -| dbt Cloud setup | CLI installation | latest verified version | -| ---------------- | ----------------------------------------- | ------------------------ | -| [Setup AlloyDB](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb) | [Install AlloyDB](/docs/core/connect-data-platform/alloydb-setup) | (same as `dbt-postgres`) | -| Not supported | [Install Azure Synapse](/docs/core/connect-data-platform/azuresynapse-setup) | 1.3 :construction: | -| [Set up BigQuery](/docs/cloud/connect-data-platform/connect-bigquery) | [Install BigQuery](/docs/core/connect-data-platform/bigquery-setup) | 1.4 | -| [Set up Databricks ](/docs/cloud/connect-data-platform/connect-databricks)| [ Install Databricks](/docs/core/connect-data-platform/databricks-setup) | 1.4 | -| Not supported | [Install Dremio](/docs/core/connect-data-platform/dremio-setup) | 1.4 :construction: | -| [Set up Postgres](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb) | [Install Postgres](/docs/core/connect-data-platform/postgres-setup) | 1.4 | -| [Set up Redshift](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb) | [Install Redshift](/docs/core/connect-data-platform/redshift-setup) | 1.4 | -| [Set up Snowflake](/docs/cloud/connect-data-platform/connect-snowflake) | [ Install Snowflake](/docs/core/connect-data-platform/snowflake-setup) | 1.4 | -| [Set up Spark](/docs/cloud/connect-data-platform/connect-apache-spark) | [Install Spark](/docs/core/connect-data-platform/spark-setup) | 1.4 | -| [Set up Starburst & Trino](/docs/cloud/connect-data-platform/connect-starburst-trino)| [Installl Starburst & Trino](/docs/core/connect-data-platform/trino-setup) | 1.4 | - -:construction:: Verification in progress +import AdaptersVerified from '/snippets/_adapters-verified.md'; + + + -To learn more, see [Verifying a new adapter](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter). diff --git a/website/sidebars.js b/website/sidebars.js index 5ec41bf2d86..ed34edcd5b9 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -8,6 +8,7 @@ const sidebarSettings = { link: { type: "doc", id: "docs/supported-data-platforms" }, items: [ "docs/connect-adapters", + "docs/verified-adapters", "docs/trusted-adapters", "docs/community-adapters", "docs/contribute-core-adapters", diff --git a/website/snippets/_adapters-verified.md b/website/snippets/_adapters-verified.md index fe0bc394d09..70cc90070c0 100644 --- a/website/snippets/_adapters-verified.md +++ b/website/snippets/_adapters-verified.md @@ -53,4 +53,6 @@ body="Install using the CLI


🚧 Verification in progress" icon="rocket"/> - \ No newline at end of file + + +* Install these adapters using the CLI as they're not currently supported in dbt Cloud.
From 52a9f693cfc0cbfd5b125817e1c16f77569eb9ff Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 17 Jul 2023 13:33:25 -0400 Subject: [PATCH 005/103] not relevant --- website/docs/docs/connect-adapters.md | 20 +++---------------- website/docs/docs/verified-adapters.md | 6 +----- .../5-documenting-a-new-adapter.md | 7 +------ 3 files changed, 5 insertions(+), 28 deletions(-) diff --git a/website/docs/docs/connect-adapters.md b/website/docs/docs/connect-adapters.md index 5632fb3793e..f45da732abb 100644 --- a/website/docs/docs/connect-adapters.md +++ b/website/docs/docs/connect-adapters.md @@ -5,32 +5,18 @@ id: "connect-adapters" Adapters are an essential component of dbt. At their most basic level, they are how dbt connects with the various supported data platforms. At a higher-level, adapters strive to give analytics engineers more transferrable skills as well as standardize how analytics projects are structured. Gone are the days where you have to learn a new language or flavor of SQL when you move to a new job that has a different data platform. That is the power of adapters in dbt — for more detail, read the [What are adapters](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) guide. -This section provides more details on different ways you can connect dbt to an adapter, and explains what a maintainer is. +This section provides more details on different ways you can connect dbt to an adapter, and explains what a maintainer is. ### Set up in dbt Cloud -Explore the fastest and most reliable way to deploy dbt using dbt Cloud, a hosted architecture that runs dbt Core across your organization. dbt Cloud lets you seamlessly [connect](/docs/cloud/about-cloud-setup) with a variety of [verified](/docs/supported-data-platforms) data platform providers directly in the dbt Cloud UI. - -dbt Cloud supports data platforms that are verified and [maintained](#maintainers) by dbt Labs or partners. This level of support ensures that users can trust certain adapters for use in production. +Explore the fastest and most reliable way to deploy dbt using dbt Cloud, a hosted architecture that runs dbt Core across your organization. dbt Cloud lets you seamlessly [connect](/docs/cloud/about-cloud-setup) with a variety of [verified](/docs/supported-data-platforms) data platform providers directly in the dbt Cloud UI. ### Install using the CLI -Install dbt Core, which is an open-source tool, locally using the CLI. dbt communicates with a number of different data platforms by using a dedicated adapter plugin for each. When you install dbt Core, you'll also need to install the specific adapter for your database, [connect to dbt Core](/docs/core/about-core-setup), and set up a `profiles.yml` file. - -Data platforms supported in dbt Core may be verified or unverified, and are [maintained](#maintainers) by dbt Labs, partners, or community members. +Install dbt Core, which is an open-source tool, locally using the CLI. dbt communicates with a number of different data platforms by using a dedicated adapter plugin for each. When you install dbt Core, you'll also need to install the specific adapter for your database, [connect to dbt Core](/docs/core/about-core-setup), and set up a `profiles.yml` file. With a few exceptions [^1], you can install all [Verified adapters](/docs/supported-data-platforms) from PyPI using `pip install adapter-name`. For example to install Snowflake, use the command `pip install dbt-snowflake`. 
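To make the `profiles.yml` step concrete, the sketch below shows roughly what a target for `dbt-snowflake` could look like; the profile name, account, user, role, database, warehouse, and schema values are placeholders, and the password is read from an environment variable rather than hard-coded.

```yaml
# ~/.dbt/profiles.yml (illustrative sketch; every value below is a placeholder)
my_dbt_project:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: abc12345.us-east-1
      user: analytics_user
      password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"
      role: transformer
      database: analytics
      warehouse: transforming
      schema: dbt_dev
      threads: 4
```

Running `dbt debug` against a target like this is a quick way to confirm that the adapter is installed and can reach the platform.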
The installation will include `dbt-core` and any other required dependencies, which may include both other dependencies and even other adapter plugins. Read more about [installing dbt](/docs/core/installation). - -## Maintainers - -Who made and maintains an adapter is certainly relevant, but we recommend using an adapter's verification status to determine the quality and health of an adapter. So far there are three categories of maintainers: - -| Supported by | Maintained By | -| ------------ | ---------------- | -| dbt Labs | dbt Labs maintains a set of adapter plugins for some of the most common databases, warehouses, and platforms. As for why particular data platforms were chosen, see ["Why Verify an Adapter"](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter#why-verify-an-adapter) | -| Partner | These adapter plugins are built and maintained by the same people who build and maintain the complementary data technology. | -| Community | These adapter plugins are contributed and maintained by members of the community. 🌱 | [^1]: Here are the two different adapters. Use the PyPI package name when installing with `pip` | Adapter repo name | PyPI package name | diff --git a/website/docs/docs/verified-adapters.md b/website/docs/docs/verified-adapters.md index 3da1f2caf7b..d5611c7062f 100644 --- a/website/docs/docs/verified-adapters.md +++ b/website/docs/docs/verified-adapters.md @@ -4,7 +4,7 @@ id: "verified-adapters" --- -The dbt Labs has a rigorous verified adapter program which provides reassurance to users about which adapters can be trusted to use in production, has been tested, and is actively maintained and updated. The process covers aspects of development, documentation, user experience, and maintenance. +The dbt Labs has a rigorous verified adapter program which provides reassurance to users about which adapters can be trusted to use in production, has been tested, and is actively maintained and updated. The process covers aspects of development, documentation, user experience, and maintenance. These adapters then earn a "Verified" status so that users can have a certain level of trust and expectation when they use them. The adapters also have maintainers and we recommend using the adapter's verification status to determine its quality and health. @@ -15,7 +15,3 @@ Here's the list of the verified data platforms that can connect to dbt and its l import AdaptersVerified from '/snippets/_adapters-verified.md'; - - - - diff --git a/website/docs/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter.md b/website/docs/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter.md index f8335dfcbc4..80b994aefb0 100644 --- a/website/docs/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter.md +++ b/website/docs/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter.md @@ -8,6 +8,7 @@ If you've already [built](3-building-a-new-adapter), and [tested](4-testing-a-ne ## Making your adapter available Many community members maintain their adapter plugins under open source licenses. If you're interested in doing this, we recommend: + - Hosting on a public git provider (for example, GitHub or Gitlab) - Publishing to [PyPI](https://pypi.org/) - Adding to the list of ["Supported Data Platforms"](/docs/supported-data-platforms#community-supported) (more info below) @@ -35,17 +36,12 @@ We ask our adapter maintainers to use the [docs.getdbt.com repo](https://github. 
To simplify things, assume the reader of this documentation already knows how both dbt and your data platform works. There's already great material for how to learn dbt and the data platform out there. The documentation we're asking you to add should be what a user who is already profiecient in both dbt and your data platform would need to know in order to use both. Effectively that boils down to two things: how to connect, and how to configure. - ## Topics and Pages to Cover - The following subjects need to be addressed across three pages of this docs site to have your data platform be listed on our documentation. After the corresponding pull request is merged, we ask that you link to these pages from your adapter repo's `REAMDE` as well as from your product documentation. To contribute, all you will have to do make the changes listed in the table below. - - - | How To... | File to change within `/website/docs/` | Action | Info to Include | |----------------------|--------------------------------------------------------------|--------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Connect | `/docs/core/connect-data-platform/{MY-DATA-PLATFORM}-setup.md` | Create | Give all information needed to define a target in `~/.dbt/profiles.yml` and get `dbt debug` to connect to the database successfully. All possible configurations should be mentioned. | @@ -55,7 +51,6 @@ The following subjects need to be addressed across three pages of this docs site For example say I want to document my new adapter: `dbt-ders`. For the "Connect" page, I will make a new Markdown file, `ders-setup.md` and add it to the `/website/docs/core/connect-data-platform/` directory. - ## Example PRs to add new adapter documentation Below are some recent pull requests made by partners to document their data platform's adapter: From 27091518195d8136005b6d7bcbf7593ef6f261a8 Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 17 Jul 2023 13:44:43 -0400 Subject: [PATCH 006/103] polish --- website/docs/docs/trusted-adapters.md | 17 +++++++++-------- .../8-trusting-a-new-adapter.md | 9 +++++---- 2 files changed, 14 insertions(+), 12 deletions(-) diff --git a/website/docs/docs/trusted-adapters.md b/website/docs/docs/trusted-adapters.md index 86bed045f44..8b304d01b0d 100644 --- a/website/docs/docs/trusted-adapters.md +++ b/website/docs/docs/trusted-adapters.md @@ -5,20 +5,22 @@ id: "trusted-adapters" Trusted adapters are adapters not maintained by dbt Labs, that we feel comfortable recommending to users for use in production. -### to be toggle heading'd +Free and open-source tools for the data professional are increasingly abundant. This is by-and-large a *good thing*, however it requires due dilligence that wasn't required in a paid-license, closed-source software world. As a user, there are questions to answer important before taking a dependency on an open-source project. The trusted adapter designation is meant to streamline this process for end users. -Free and open-source tools for the data professional are increasingly abundant. This is by-and-large a *good thing*, however it requires due dilligence that wasn't required in a paid-license, closed-source software world. Before taking a dependency on an open-source projet is is important to determine the answer to the following questions: +
Considerations for depending on an open-source project

1. Does it work?
2. Does anyone "own" the code, or is anyone liable for ensuring it works?
3. Do bugs get fixed quickly?
4. Does it stay up-to-date with new Core features?
5. Is the usage substantial enough to self-sustain?
6. What risks do I take on by taking a dependency on this library?
-if you're an adapter maintainer interested in joining the trusted adapter program click [Building a Trusted Adapter](8-building-a-trusted-adapter). +### Trusted adapter specifications + +See [Building a Trusted Adapter](/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter) for more information, particularly if you are an adapter maintainer considering having your adapter be added to the trusted list. ### Trusted vs Verified @@ -27,8 +29,7 @@ The Verification program (currently paused) exists to highlight adapters that me - the guidelines given in the Trusted program, - formal agreements required for integration with dbt Cloud -For more information on the Verified Adapter program, reach out the [dbt Labs parnterships team](partnerships@dbtlabs.com) - +For more information on the Verified Adapter program, reach out the [dbt Labs partnerships team](mailto:partnerships@dbtlabs.com) ### Trusted adapters @@ -36,4 +37,4 @@ The following are **Trusted adapters** βœ“ you can connect to in dbt Core: import AdaptersTrusted from '/snippets/_adapters-trusted.md'; - \ No newline at end of file + diff --git a/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md b/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md index fbaabe83ea0..9e39e5a8897 100644 --- a/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md +++ b/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md @@ -15,13 +15,14 @@ By opt-ing into the below, you agree to this, and we take you at your word. dbt To be considered for the Trusted Adapter program, the adapter must cover the essential functionality of dbt Core given below, with best effort given to support the entire feature set. -The adapter should have the required documentation for connecting and configuring the adapter. The dbt docs site should be the single source of truth for this information. These docs should be kept up-to-date. +Essential functionality includes (but is not limited to the following features): -See [this guide](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter) for more information +- table, view, and seed materializations +- dbt tests -#### what is essential? +The adapter should have the required documentation for connecting and configuring the adapter. The dbt docs site should be the single source of truth for this information. These docs should be kept up-to-date. -tables, views, seeds, tests etc +See [Documenting a new adapter](/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter) for more information. 
### Release Cadence From c6909d5acd869c8670f7bec446d6e0247771b1ce Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 17 Jul 2023 14:19:21 -0400 Subject: [PATCH 007/103] sample workflow --- .../add-adapter-to-trusted-list.yml | 51 +++++++++++++++++++ 1 file changed, 51 insertions(+) create mode 100644 .github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml diff --git a/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml b/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml new file mode 100644 index 00000000000..b9d45621620 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml @@ -0,0 +1,51 @@ +name: Add adapter to Trusted list +description: > + For adapter maintainers who wish to have theirs added to the list of [Trusted adapters](https://docs.getdbt.com/docs/trusted-adapters) +labels: ["adapter maintainers","developer blog"] +body: + - type: markdown + attributes: + value: | + We're excited that you'd like to support your adapter formally as "Trusted"! This template will ensure that you are aware of the process and the guidelines. Additionally, that you can vouch that your adapter currently what is expected of a Trusted adapter + + - type: input + id: adapter-repo + attributes: + label: Link to adapter repo + description: Please link to the GitHub repo + validations: + required: true + + - type: input + id: contact + attributes: + label: Contact Details + description: How can we get in touch with you? + placeholder: your preferred email and/or dbt Slack handle + validations: + required: true + + - type: checkboxes + id: author_type + attributes: + label: Which of these best describes you? + options: + - label: I am a dbt Community member or partner contributing to the Developer Blog + - label: I work for the vendor on top of which the dbt adapter functions + + - type: checkboxes + id: read-program-guide + attributes: + label: Please agree to the each of the following + options: + - label: I am a maintainer of the adapter being submited for Trusted status + - label: I have read both the [Trusted adapters](https://docs.getdbt.com/docs/trusted-adapters) and [Building a Trusted Adapter](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter) pages. + - label: I believe that the adapter currently meets the expectations given above + - label: I will ensure this adapter stays in compliance with the guidelines indefinitely + - label: I understand that dbt Labs reserves the right to remove an adapter from the trusted adapter list at any time, should any of the below guidelines not be met + validations: + required: true + + + + From 973269b433642194eda4c75b4255393c6b3f91aa Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 17 Jul 2023 14:22:10 -0400 Subject: [PATCH 008/103] reference soon-to-exist template --- .../adapter-development/8-trusting-a-new-adapter.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md b/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md index 9e39e5a8897..4d54fc2e90b 100644 --- a/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md +++ b/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md @@ -66,7 +66,7 @@ The adapter repository is: ## How to get an adapter verified? -To submit your adapter for consideration as a Trusted adapter, use the "trusted adapter" issue template on the docs.getdbt.com repository. 
This will prompt you to agree to the following checkboxes: +Open an issue on the [docs.getdbt.com GitHub repository](https://github.com/dbt-labs/docs.getdbt.com) using the "Add adapter to Trusted list" template. In addition to contact information, it will ask confirm that you agree to the following. 1. my adapter meet the guidelines given above 2. I will make best reasonable effort that this continues to be so From 3b6a324c1f9fecfcd830dc95b1dc6369d5766337 Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 24 Jul 2023 10:36:30 -0400 Subject: [PATCH 009/103] better organize --- website/docs/docs/supported-data-platforms.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md index bc2a5a119a5..be16a3f1a71 100644 --- a/website/docs/docs/supported-data-platforms.md +++ b/website/docs/docs/supported-data-platforms.md @@ -8,11 +8,13 @@ hide_table_of_contents: true dbt connects to and runs SQL against your database, warehouse, lake, or query engine. These SQL-speaking platforms are collectively referred to as _data platforms_. dbt connects with data platforms by using a dedicated adapter plugin for each. Plugins are built as Python modules that dbt Core discovers if they are installed on your system. Read [What are Adapters](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) for more info. -You can also [further configure](/reference/resource-configs/postgres-configs) your specific data platform to optimize performance. +You can [connect](/docs/connect-adapters) to adapters and data platforms either directly in the dbt Cloud user interface (UI) or install them manually using the command line (CLI). + +You can also further customize how dbt works with your specific data platform via configuration: see [Configuring Postgres](/reference/resource-configs/postgres-configs) for an example. ## Types of Adapters -You can [connect](/docs/connect-adapters) to adapters and data platforms either directly in the dbt Cloud user interface (UI) or install them manually using the command line (CLI). There are three types of adapters available today. The purpose of differentiation is to provide users with an easier means to evaluate adapter quality. +There are three types of adapters available today. The purpose of differentiation is to provide users with an easier means to evaluate adapter quality. - **Verified** — dbt Labs' strict [adapter program](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter) assures users of trustworthy, tested, and regularly updated adapters for production use. Verified adapters earn a "Verified" status, providing users with trust and confidence. - **Trusted** — [Trusted adapters](trusted-adapters) are those where the adapter maintainers have agreed to meet a higher standard of quality. 
From 8ec3c94dcc70136bf14b281999c819b7fa750604 Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 24 Jul 2023 10:39:25 -0400 Subject: [PATCH 010/103] amyd feedback --- website/docs/docs/supported-data-platforms.md | 2 +- website/docs/docs/trusted-adapters.md | 2 +- website/docs/docs/verified-adapters.md | 2 ++ 3 files changed, 4 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md index be16a3f1a71..b9962dc8738 100644 --- a/website/docs/docs/supported-data-platforms.md +++ b/website/docs/docs/supported-data-platforms.md @@ -16,7 +16,7 @@ You can also further customize how dbt works with your specific data platform vi There are three types of adapters available today. The purpose of differentiation is to provide users with an easier means to evaluate adapter quality. -- **Verified** — dbt Labs' strict [adapter program](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter) assures users of trustworthy, tested, and regularly updated adapters for production use. Verified adapters earn a "Verified" status, providing users with trust and confidence. +- **Verified** — [Verified adapters](verified-adapters) are those that have completed a rigorous verification process in collaboration with dbt Labs. - **Trusted** — [Trusted adapters](trusted-adapters) are those where the adapter maintainers have agreed to meet a higher standard of quality. - **Community** — [Community adapters](community-adapters) are open-source and maintained by community members. diff --git a/website/docs/docs/trusted-adapters.md b/website/docs/docs/trusted-adapters.md index 8b304d01b0d..f3ff07467c3 100644 --- a/website/docs/docs/trusted-adapters.md +++ b/website/docs/docs/trusted-adapters.md @@ -24,7 +24,7 @@ See [Building a Trusted Adapter](/guides/dbt-ecosystem/adapter-development/8-bui ### Trusted vs Verified -The Verification program (currently paused) exists to highlight adapters that meets both of the following criteria: +The Verification program exists to highlight adapters that meets both of the following criteria: - the guidelines given in the Trusted program, - formal agreements required for integration with dbt Cloud diff --git a/website/docs/docs/verified-adapters.md b/website/docs/docs/verified-adapters.md index d5611c7062f..8ec0c700ea4 100644 --- a/website/docs/docs/verified-adapters.md +++ b/website/docs/docs/verified-adapters.md @@ -8,6 +8,8 @@ The dbt Labs has a rigorous verified adapter program which provides reassurance These adapters then earn a "Verified" status so that users can have a certain level of trust and expectation when they use them. The adapters also have maintainers and we recommend using the adapter's verification status to determine its quality and health. +The verification process serves as the on-ramp to integration with dbt Cloud. As such, we restrict applicants to data platform vendors with whom we are already engaged. + To learn more, see [Verifying a new adapter](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter). Here's the list of the verified data platforms that can connect to dbt and its latest version. 
From a82419469bd5acfc5c77d56a8595ba1b8c9fc7ab Mon Sep 17 00:00:00 2001 From: Anders Date: Mon, 31 Jul 2023 14:43:21 -0400 Subject: [PATCH 011/103] Apply suggestions from code review Co-authored-by: Jason Ganz --- .github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml | 8 ++++---- website/docs/docs/supported-data-platforms.md | 2 +- .../adapter-development/8-trusting-a-new-adapter.md | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml b/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml index b9d45621620..1706f1c0e6a 100644 --- a/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml +++ b/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml @@ -1,12 +1,12 @@ name: Add adapter to Trusted list description: > For adapter maintainers who wish to have theirs added to the list of [Trusted adapters](https://docs.getdbt.com/docs/trusted-adapters) -labels: ["adapter maintainers","developer blog"] +labels: ["adapter maintainers"] body: - type: markdown attributes: value: | - We're excited that you'd like to support your adapter formally as "Trusted"! This template will ensure that you are aware of the process and the guidelines. Additionally, that you can vouch that your adapter currently what is expected of a Trusted adapter + We're excited that you'd like to support your adapter formally as "Trusted"! This template will ensure that you are aware of the process and the guidelines. Additionally, that you can vouch that your adapter currently meets the standards of a Trusted adapter - type: input id: adapter-repo @@ -30,7 +30,7 @@ body: attributes: label: Which of these best describes you? options: - - label: I am a dbt Community member or partner contributing to the Developer Blog + - label: I am a dbt Community member - label: I work for the vendor on top of which the dbt adapter functions - type: checkboxes @@ -41,7 +41,7 @@ body: - label: I am a maintainer of the adapter being submited for Trusted status - label: I have read both the [Trusted adapters](https://docs.getdbt.com/docs/trusted-adapters) and [Building a Trusted Adapter](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter) pages. - label: I believe that the adapter currently meets the expectations given above - - label: I will ensure this adapter stays in compliance with the guidelines indefinitely +- label: I will ensure this adapter stays in compliance with the guidelines - label: I understand that dbt Labs reserves the right to remove an adapter from the trusted adapter list at any time, should any of the below guidelines not be met validations: required: true diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md index b9962dc8738..e1e14421b95 100644 --- a/website/docs/docs/supported-data-platforms.md +++ b/website/docs/docs/supported-data-platforms.md @@ -14,7 +14,7 @@ You can also further customize how dbt works with your specific data platform vi ## Types of Adapters -There are three types of adapters available today. The purpose of differentiation is to provide users with an easier means to evaluate adapter quality. +There are three types of adapters available today: - **Verified** — [Verified adapters](verified-adapters) are those that have completed a rigorous verification process in collaboration with dbt Labs. - **Trusted** — [Trusted adapters](trusted-adapters) are those where the adapter maintainers have agreed to meet a higher standard of quality. 
diff --git a/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md b/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md index 4d54fc2e90b..059f793fce1 100644 --- a/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md +++ b/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md @@ -9,7 +9,7 @@ The Trusted adapter program exists to allow adapter maintainers to demonstrate t Below are some categories with stuff. -By opt-ing into the below, you agree to this, and we take you at your word. dbt Labs reserves the right to remove an adapter from the trusted adapter list at any time, should any of the below guidelines not be met. +By opting into the below, you agree to this, and we take you at your word. dbt Labs reserves the right to remove an adapter from the trusted adapter list at any time, should any of the below guidelines not be met. ### Feature Completeness From 0391333fef0f4a5dac9ff7d26004fe8699f1698b Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 31 Jul 2023 14:47:27 -0400 Subject: [PATCH 012/103] address feedback --- ...-trusting-a-new-adapter.md => 8-building-a-trusted-adapter.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename website/docs/guides/dbt-ecosystem/adapter-development/{8-trusting-a-new-adapter.md => 8-building-a-trusted-adapter.md} (100%) diff --git a/website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md b/website/docs/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter.md similarity index 100% rename from website/docs/guides/dbt-ecosystem/adapter-development/8-trusting-a-new-adapter.md rename to website/docs/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter.md From 274cb783a3e2e764d4c2cf5cf0bd1a73d370e84f Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Mon, 31 Jul 2023 14:49:20 -0400 Subject: [PATCH 013/103] feedback --- .../8-building-a-trusted-adapter.md | 2 -- website/snippets/_adapters-trusted.md | 10 ---------- 2 files changed, 12 deletions(-) diff --git a/website/docs/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter.md b/website/docs/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter.md index 059f793fce1..b8cef0ea34a 100644 --- a/website/docs/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter.md +++ b/website/docs/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter.md @@ -7,8 +7,6 @@ The Trusted adapter program exists to allow adapter maintainers to demonstrate t ## What does it mean to be trusted -Below are some categories with stuff. - By opting into the below, you agree to this, and we take you at your word. dbt Labs reserves the right to remove an adapter from the trusted adapter list at any time, should any of the below guidelines not be met. ### Feature Completeness diff --git a/website/snippets/_adapters-trusted.md b/website/snippets/_adapters-trusted.md index 57e667e5198..7d961e62ee6 100644 --- a/website/snippets/_adapters-trusted.md +++ b/website/snippets/_adapters-trusted.md @@ -1,15 +1,5 @@
- - - - Date: Mon, 7 Aug 2023 07:56:40 -0500 Subject: [PATCH 014/103] Correct the definition of monotonically increasing The current definition of monotonically increasing refers to a linearly increasing function, not a monotonically increasing one. See the definition here: https://en.wikipedia.org/wiki/Monotonic_function --- website/docs/terms/monotonically-increasing.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/website/docs/terms/monotonically-increasing.md b/website/docs/terms/monotonically-increasing.md index 397e333942a..bf7e141a2cc 100644 --- a/website/docs/terms/monotonically-increasing.md +++ b/website/docs/terms/monotonically-increasing.md @@ -1,11 +1,11 @@ --- id: monotonically-increasing title: Monotonically increasing -description: Monotonicity means unchanging (think monotone). A monotonically-increasing value is a value which increases at a constant rate, for example the values 1, 2, 3, 4. +description: Monotonicity means unchanging (think monotone). A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. displayText: monotonically increasing -hoverSnippet: Monotonicity means unchanging (think monotone). A monotonically-increasing value is a value which increases at a constant rate, for example the values 1, 2, 3, 4. +hoverSnippet: Monotonicity means unchanging (think monotone). A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. --- -Monotonicity means unchanging (think monotone). A monotonically-increasing value is a value which increases at a constant rate, for example the values `[1, 2, 3, 4]`. +Monotonicity means unchanging (think monotone). A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences [1, 6, 7, 11, 131] or [2, 5, 5, 5, 6, 10]. -Monotonically-increasing values often appear in primary keys generated by production systems. In an analytics engineering context, you should avoid generating such values or assuming their existence in your models, because they make it more difficult to create an data model. Instead you should create a which is derived from the unique component(s) of a row. \ No newline at end of file +Monotonically-increasing values often appear in primary keys generated by production systems. In an analytics engineering context, you should avoid generating such values or assuming their existence in your models, because they make it more difficult to create an data model. Instead you should create a which is derived from the unique component(s) of a row. From cb8967f337a992d7345fe8dd7838f7316db56df7 Mon Sep 17 00:00:00 2001 From: Andrew Russell Date: Mon, 7 Aug 2023 08:00:37 -0500 Subject: [PATCH 015/103] Update monotonically-increasing.md --- website/docs/terms/monotonically-increasing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/terms/monotonically-increasing.md b/website/docs/terms/monotonically-increasing.md index bf7e141a2cc..0bc8db4b431 100644 --- a/website/docs/terms/monotonically-increasing.md +++ b/website/docs/terms/monotonically-increasing.md @@ -6,6 +6,6 @@ displayText: monotonically increasing hoverSnippet: Monotonicity means unchanging (think monotone). A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. 
--- -Monotonicity means unchanging (think monotone). A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences [1, 6, 7, 11, 131] or [2, 5, 5, 5, 6, 10]. +Monotonicity means unchanging (think monotone). A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences `[1, 6, 7, 11, 131]` or `[2, 5, 5, 5, 6, 10]`. Monotonically-increasing values often appear in primary keys generated by production systems. In an analytics engineering context, you should avoid generating such values or assuming their existence in your models, because they make it more difficult to create an data model. Instead you should create a which is derived from the unique component(s) of a row. From 7a1fa190a1ebc82cd987d043a96640416d583ff7 Mon Sep 17 00:00:00 2001 From: Andrew Russell Date: Tue, 8 Aug 2023 07:25:32 -0500 Subject: [PATCH 016/103] Update monotonically-increasing.md --- website/docs/terms/monotonically-increasing.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/terms/monotonically-increasing.md b/website/docs/terms/monotonically-increasing.md index 0bc8db4b431..6d1264237ab 100644 --- a/website/docs/terms/monotonically-increasing.md +++ b/website/docs/terms/monotonically-increasing.md @@ -1,11 +1,11 @@ --- id: monotonically-increasing title: Monotonically increasing -description: Monotonicity means unchanging (think monotone). A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. +description: A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. displayText: monotonically increasing -hoverSnippet: Monotonicity means unchanging (think monotone). A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. +hoverSnippet: A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. --- -Monotonicity means unchanging (think monotone). A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences `[1, 6, 7, 11, 131]` or `[2, 5, 5, 5, 6, 10]`. +Monotonicity means unchanging (think monotone); a monotonic sequence is a sequence where the order of the value of the elements does not change. In other words, a monotonically-increasing sequence is a sequence whose values do not decrease. For example the sequences `[1, 6, 7, 11, 131]` or `[2, 5, 5, 5, 6, 10]`. Monotonically-increasing values often appear in primary keys generated by production systems. In an analytics engineering context, you should avoid generating such values or assuming their existence in your models, because they make it more difficult to create an data model. Instead you should create a which is derived from the unique component(s) of a row. 
From 0a203cac2fff91f9cfd1fad659c79ba68a5c7ec8 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 10 Aug 2023 15:42:35 -0400 Subject: [PATCH 017/103] Deprecate v1.0 from docs --- contributing/single-sourcing-content.md | 2 +- website/dbt-versions.js | 4 -- website/docs/docs/build/incremental-models.md | 6 --- .../building-models/python-models.md | 2 +- .../connect-data-platform/bigquery-setup.md | 50 ------------------- .../core/connect-data-platform/spark-setup.md | 4 -- .../faqs/Core/install-python-compatibility.md | 6 --- website/docs/guides/legacy/best-practices.md | 6 --- .../docs/reference/node-selection/methods.md | 5 -- .../docs/reference/node-selection/syntax.md | 11 ---- .../resource-configs/persist_docs.md | 2 +- .../reference/resource-properties/config.md | 6 --- website/docs/reference/source-configs.md | 8 --- 13 files changed, 3 insertions(+), 109 deletions(-) diff --git a/contributing/single-sourcing-content.md b/contributing/single-sourcing-content.md index ca27372e5bc..fe64ce6521a 100644 --- a/contributing/single-sourcing-content.md +++ b/contributing/single-sourcing-content.md @@ -90,7 +90,7 @@ This component can be added directly to a markdown file in a similar way as othe Both properties can be used together to set a range where the content should show. In the example below, this content will only show if the selected version is between **0.21** and **1.0**: ```markdown - + Versioned content here diff --git a/website/dbt-versions.js b/website/dbt-versions.js index a59822101e9..655d4f02b7b 100644 --- a/website/dbt-versions.js +++ b/website/dbt-versions.js @@ -23,10 +23,6 @@ exports.versions = [ version: "1.1", EOLDate: "2023-04-28", }, - { - version: "1.0", - EOLDate: "2022-12-03" - }, ] exports.versionedPages = [ diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md index 89115652a9c..d3c3f25890b 100644 --- a/website/docs/docs/build/incremental-models.md +++ b/website/docs/docs/build/incremental-models.md @@ -79,12 +79,6 @@ A `unique_key` enables updating existing rows instead of just appending new rows Not specifying a `unique_key` will result in append-only behavior, which means dbt inserts all rows returned by the model's SQL into the preexisting target table without regard for whether the rows represent duplicates. - - -The optional `unique_key` parameter specifies a field that can uniquely identify each row within your model. You can define `unique_key` in a configuration block at the top of your model. If your model doesn't contain a single field that is unique, but rather a combination of columns, we recommend that you create a single column that can serve as a unique identifier (by concatenating and hashing those columns), and pass it into your model's configuration. - - - The optional `unique_key` parameter specifies a field (or combination of fields) that define the grain of your model. That is, the field(s) identify a single unique row. You can define `unique_key` in a configuration block at the top of your model, and it can be a single column name or a list of column names. 
diff --git a/website/docs/docs/building-a-dbt-project/building-models/python-models.md b/website/docs/docs/building-a-dbt-project/building-models/python-models.md index 1aab8ac7a92..9c1127bb9f2 100644 --- a/website/docs/docs/building-a-dbt-project/building-models/python-models.md +++ b/website/docs/docs/building-a-dbt-project/building-models/python-models.md @@ -19,7 +19,7 @@ Below, you'll see sections entitled "❓ **Our questions**." We are excited to h dbt Python ("dbt-py") models will help you solve use cases that can't be solved with SQL. You can perform analyses using tools available in the open source Python ecosystem, including state-of-the-art packages for data science and statistics. Before, you would have needed separate infrastructure and orchestration to run Python transformations in production. By defining your Python transformations in dbt, they're just models in your project, with all the same capabilities around testing, documentation, and lineage. - + Python models are supported in dbt Core 1.3 and above. Learn more about [upgrading your version in dbt Cloud](https://docs.getdbt.com/docs/dbt-cloud/cloud-configuring-dbt-cloud/cloud-upgrading-dbt-versions) and [upgrading dbt Core versions](https://docs.getdbt.com/docs/core-versions#upgrading-to-new-patch-versions). diff --git a/website/docs/docs/core/connect-data-platform/bigquery-setup.md b/website/docs/docs/core/connect-data-platform/bigquery-setup.md index b0fc9fa7cf0..a34a4a0def2 100644 --- a/website/docs/docs/core/connect-data-platform/bigquery-setup.md +++ b/website/docs/docs/core/connect-data-platform/bigquery-setup.md @@ -317,56 +317,6 @@ my-profile: - - -BigQuery supports query timeouts. By default, the timeout is set to 300 seconds. If a dbt model takes longer than this timeout to complete, then BigQuery may cancel the query and issue the following error: - -``` - Operation did not complete within the designated timeout. -``` - -To change this timeout, use the `timeout_seconds` configuration: - - - -```yaml -my-profile: - target: dev - outputs: - dev: - type: bigquery - method: oauth - project: abc-123 - dataset: my_dataset - timeout_seconds: 600 # 10 minutes -``` - - - -The `retries` profile configuration designates the number of times dbt should retry queries that result in unhandled server errors. This configuration is only specified for BigQuery targets. Example: - - - -```yaml -# This example target will retry BigQuery queries 5 -# times with a delay. If the query does not succeed -# after the fifth attempt, then dbt will raise an error - -my-profile: - target: dev - outputs: - dev: - type: bigquery - method: oauth - project: abc-123 - dataset: my_dataset - retries: 5 -``` - - - - - ### Dataset locations The location of BigQuery datasets can be configured using the `location` configuration in a BigQuery profile. 
diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md index 2e3b5a66de8..c3886f37e9e 100644 --- a/website/docs/docs/core/connect-data-platform/spark-setup.md +++ b/website/docs/docs/core/connect-data-platform/spark-setup.md @@ -207,8 +207,6 @@ your_profile_name: - - ## Optional configurations ### Retries @@ -227,8 +225,6 @@ connect_retries: 3 - - ## Caveats ### Usage with EMR diff --git a/website/docs/faqs/Core/install-python-compatibility.md b/website/docs/faqs/Core/install-python-compatibility.md index d24466f4990..4d6066d931b 100644 --- a/website/docs/faqs/Core/install-python-compatibility.md +++ b/website/docs/faqs/Core/install-python-compatibility.md @@ -23,12 +23,6 @@ The latest version of `dbt-core` is compatible with Python versions 3.7, 3.8, 3. - - -As of v1.0, `dbt-core` is compatible with Python versions 3.7, 3.8, and 3.9. - - - Adapter plugins and their dependencies are not always compatible with the latest version of Python. For example, dbt-snowflake v0.19 is not compatible with Python 3.9, but dbt-snowflake versions 0.20+ are. New dbt minor versions will add support for new Python3 minor versions as soon as all dependencies can support it. In turn, dbt minor versions will drop support for old Python3 minor versions right before they reach [end of life](https://endoflife.date/python). diff --git a/website/docs/guides/legacy/best-practices.md b/website/docs/guides/legacy/best-practices.md index 018d48ba181..10e02271518 100644 --- a/website/docs/guides/legacy/best-practices.md +++ b/website/docs/guides/legacy/best-practices.md @@ -159,12 +159,6 @@ dbt test --select result:fail --exclude --defer --state path/to/p > Note: If you're using the `--state target/` flag, `result:error` and `result:fail` flags can only be selected concurrently(in the same command) if using the `dbt build` command. `dbt test` will overwrite the `run_results.json` from `dbt run` in a previous command invocation. - - -Only supported by v1.1 or newer. - - - Only supported by v1.1 or newer. diff --git a/website/docs/reference/node-selection/methods.md b/website/docs/reference/node-selection/methods.md index ff86d60c06a..ca66b00044f 100644 --- a/website/docs/reference/node-selection/methods.md +++ b/website/docs/reference/node-selection/methods.md @@ -252,11 +252,6 @@ $ dbt seed --select result:error --state path/to/artifacts # run all seeds that ``` ### The "source_status" method - - -Supported in v1.1 or newer. - - diff --git a/website/docs/reference/node-selection/syntax.md b/website/docs/reference/node-selection/syntax.md index 1a43a32e2bc..a60d23cd16f 100644 --- a/website/docs/reference/node-selection/syntax.md +++ b/website/docs/reference/node-selection/syntax.md @@ -174,12 +174,6 @@ $ dbt run --select result:+ state:modified+ --defer --state ./ - -Only supported by v1.1 or newer. - - - Only supported by v1.1 or newer. @@ -199,11 +193,6 @@ dbt build --select source_status:fresher+ For more example commands, refer to [Pro-tips for workflows](/guides/legacy/best-practices.md#pro-tips-for-workflows). ### The "source_status" status - - -Only supported by v1.1 or newer. 
- - diff --git a/website/docs/reference/resource-configs/persist_docs.md b/website/docs/reference/resource-configs/persist_docs.md index 6facf3945cb..7134972d2ca 100644 --- a/website/docs/reference/resource-configs/persist_docs.md +++ b/website/docs/reference/resource-configs/persist_docs.md @@ -151,7 +151,7 @@ Some known issues and limitations: - + - Column names that must be quoted, such as column names containing special characters, will cause runtime errors if column-level `persist_docs` is enabled. This is fixed in v1.2. diff --git a/website/docs/reference/resource-properties/config.md b/website/docs/reference/resource-properties/config.md index 32143c1da07..1d3a2de6592 100644 --- a/website/docs/reference/resource-properties/config.md +++ b/website/docs/reference/resource-properties/config.md @@ -108,12 +108,6 @@ version: 2 - - -We have added support for the `config` property on sources in dbt Core v1.1 - - - diff --git a/website/docs/reference/source-configs.md b/website/docs/reference/source-configs.md index ef428f5934c..49390c299c8 100644 --- a/website/docs/reference/source-configs.md +++ b/website/docs/reference/source-configs.md @@ -71,14 +71,6 @@ Sources can be configured via a `config:` block within their `.yml` definitions, - - -Sources can be configured from the `dbt_project.yml` file under the `sources:` key. This configuration is most useful for configuring sources imported from [a package](package-management). You can disable sources imported from a package to prevent them from rendering in the documentation, or to prevent [source freshness checks](/docs/build/sources#snapshotting-source-data-freshness) from running on source tables imported from packages. - -Unlike other resource types, sources do not yet support a `config` property. It is not possible to (re)define source configs hierarchically across multiple YAML files. - - - ### Examples #### Disable all sources imported from a package To apply a configuration to all sources included from a [package](/docs/build/packages), From 78c2bc90cad28db8371ee2755dc79ab077bdaae3 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Fri, 11 Aug 2023 11:42:48 -0400 Subject: [PATCH 018/103] Updates to billing FAQs --- website/docs/docs/cloud/billing.md | 2 +- website/docs/faqs/Accounts/cloud-upgrade-instructions.md | 6 +++--- website/docs/faqs/Accounts/payment-accepted.md | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/website/docs/docs/cloud/billing.md b/website/docs/docs/cloud/billing.md index 76b37560be3..105586c4250 100644 --- a/website/docs/docs/cloud/billing.md +++ b/website/docs/docs/cloud/billing.md @@ -15,7 +15,7 @@ As a customer, you pay for the number of seats you have and the amount of usage dbt Cloud considers a Successful Model Built as any model that is successfully built via a run through dbt Cloud’s orchestration functionality in a dbt Cloud deployment environment. Models are counted when built and run. This includes any jobs run via dbt Cloud's scheduler, CI builds (jobs triggered by pull requests), runs kicked off via the dbt Cloud API, and any successor dbt Cloud tools with similar functionality. This also includes models that are successfully built even when a run may fail to complete. For example, you may have a job that contains 100 models and on one of its runs, 51 models are successfully built and then the job fails. In this situation, only 51 models would be counted. 
-Any models built in a dbt Cloud development environment (for example, via the IDE) do not count towards your usage. Tests, seeds, and snapshots also do not count. +Any models built in a dbt Cloud development environment (for example, via the IDE) do not count towards your usage. Tests, seeds, ephemeral models, and snapshots also do not count. ### What counts as a seat license? diff --git a/website/docs/faqs/Accounts/cloud-upgrade-instructions.md b/website/docs/faqs/Accounts/cloud-upgrade-instructions.md index 76d03870478..c958d86b1d3 100644 --- a/website/docs/faqs/Accounts/cloud-upgrade-instructions.md +++ b/website/docs/faqs/Accounts/cloud-upgrade-instructions.md @@ -38,7 +38,7 @@ To unlock your account and select a plan, review the following guidance per plan 2. To unlock your account and continue using the Team plan, you need to enter your payment details. 3. Go to **Payment Information** and click **Edit** on the right. 4. Enter your payment details and click **Save**. -5. This automatically unlocks your dbt Cloud account, and you can now enjoy the benefits of the Team plan. πŸŽ‰ +5. This automatically unlocks your dbt Cloud account, and you can now enjoy the benefits of the Team plan. πŸŽ‰ @@ -59,7 +59,7 @@ For commonly asked billings questions, refer to the dbt Cloud [pricing page](htt
How does billing work?
-
Team plans are billed monthly on the credit card used to sign up, based on developer seat count. You’ll also be sent a monthly receipt to the billing email of your choice. You can change any billing information in your Account Settings -> Billing page.



+
Team plans are billed monthly on the credit card used to sign up, based on [developer seat count and usage](/docs/cloud/billing). You’ll also be sent a monthly receipt to the billing email of your choice. You can change any billing information in your Account Settings -> Billing page.



Enterprise plan customers are billed annually based on the number of developer seats, as well as any additional services + features in your chosen plan.
@@ -75,7 +75,7 @@ For commonly asked billings questions, refer to the dbt Cloud [pricing page](htt
Can I pay by invoice?
-
At present, dbt Cloud Team plan payments must be made via credit card, and by default they will be billed monthly based on the number of developer seats.



+
At present, dbt Cloud Team plan payments must be made via credit card, and by default they will be billed monthly based on the number of [developer seats and usage](/docs/cloud/billing).



We don’t have any plans to do invoicing for Team plan accounts in the near future, but we do currently support invoices for companies on the dbt Cloud Enterprise plan. Feel free to contact us to build your Enterprise pricing plan.
diff --git a/website/docs/faqs/Accounts/payment-accepted.md b/website/docs/faqs/Accounts/payment-accepted.md index 2e26063c684..1ddbdbd9e10 100644 --- a/website/docs/faqs/Accounts/payment-accepted.md +++ b/website/docs/faqs/Accounts/payment-accepted.md @@ -5,6 +5,6 @@ sidebar_label: 'Can I pay invoice' id: payment-accepted --- -Presently for Team plans, self-service dbt Cloud payments must be made via credit card and by default, they will be billed monthly based on the number of active developer seats. +Presently for Team plans, self-service dbt Cloud payments must be made via credit card and by default, they will be billed monthly based on the number of [active developer seats and usage](/docs/cloud/billing). We don't have any plans to do invoicing for self-service teams in the near future, but we *do* currently support invoices for companies on the **dbt Cloud Enterprise plan.** Feel free to [contact us](https://www.getdbt.com/contact) to build your Enterprise pricing. From 5a0ccae04756dba771bbc2883ca78e64f0541cc4 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Fri, 11 Aug 2023 12:03:09 -0400 Subject: [PATCH 019/103] Apply suggestions from code review Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/faqs/Accounts/cloud-upgrade-instructions.md | 2 +- website/docs/faqs/Accounts/payment-accepted.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/faqs/Accounts/cloud-upgrade-instructions.md b/website/docs/faqs/Accounts/cloud-upgrade-instructions.md index c958d86b1d3..08e4f4c5334 100644 --- a/website/docs/faqs/Accounts/cloud-upgrade-instructions.md +++ b/website/docs/faqs/Accounts/cloud-upgrade-instructions.md @@ -75,7 +75,7 @@ For commonly asked billings questions, refer to the dbt Cloud [pricing page](htt
Can I pay by invoice?
-
At present, dbt Cloud Team plan payments must be made via credit card, and by default they will be billed monthly based on the number of [developer seats and usage](/docs/cloud/billing).



+
Currently, dbt Cloud Team plan payments must be made with a credit card, and by default they will be billed monthly based on the number of [developer seats and usage](/docs/cloud/billing).



We don’t have any plans to do invoicing for Team plan accounts in the near future, but we do currently support invoices for companies on the dbt Cloud Enterprise plan. Feel free to contact us to build your Enterprise pricing plan.
diff --git a/website/docs/faqs/Accounts/payment-accepted.md b/website/docs/faqs/Accounts/payment-accepted.md index 1ddbdbd9e10..c0e949833a2 100644 --- a/website/docs/faqs/Accounts/payment-accepted.md +++ b/website/docs/faqs/Accounts/payment-accepted.md @@ -5,6 +5,6 @@ sidebar_label: 'Can I pay invoice' id: payment-accepted --- -Presently for Team plans, self-service dbt Cloud payments must be made via credit card and by default, they will be billed monthly based on the number of [active developer seats and usage](/docs/cloud/billing). +Currently for Team plans, self-service dbt Cloud payments must be made with a credit card and by default, they will be billed monthly based on the number of [active developer seats and usage](/docs/cloud/billing). We don't have any plans to do invoicing for self-service teams in the near future, but we *do* currently support invoices for companies on the **dbt Cloud Enterprise plan.** Feel free to [contact us](https://www.getdbt.com/contact) to build your Enterprise pricing. From f3c2cec302a6bf726537ab461f54ed743d2b6abd Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Fri, 11 Aug 2023 12:03:20 -0400 Subject: [PATCH 020/103] Update website/docs/faqs/Accounts/cloud-upgrade-instructions.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/faqs/Accounts/cloud-upgrade-instructions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/faqs/Accounts/cloud-upgrade-instructions.md b/website/docs/faqs/Accounts/cloud-upgrade-instructions.md index 08e4f4c5334..f8daf393f9b 100644 --- a/website/docs/faqs/Accounts/cloud-upgrade-instructions.md +++ b/website/docs/faqs/Accounts/cloud-upgrade-instructions.md @@ -59,7 +59,7 @@ For commonly asked billings questions, refer to the dbt Cloud [pricing page](htt
How does billing work?
-
Team plans are billed monthly on the credit card used to sign up, based on [developer seat count and usage](/docs/cloud/billing). You’ll also be sent a monthly receipt to the billing email of your choice. You can change any billing information in your Account Settings -> Billing page.



+
Team plans are billed monthly on the credit card used to sign up, based on [developer seat count and usage](/docs/cloud/billing). You’ll also be sent a monthly receipt to the billing email of your choice. You can change any billing information in your Account Settings > Billing page.



Enterprise plan customers are billed annually based on the number of developer seats, as well as any additional services + features in your chosen plan.
From 2f8399251b2911f82c8df34964ca003c9fd8b7c0 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Fri, 11 Aug 2023 15:26:50 -0400 Subject: [PATCH 021/103] Adding breaking changes FAQ --- .../docs/docs/collaborate/govern/model-contracts.md | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/website/docs/docs/collaborate/govern/model-contracts.md b/website/docs/docs/collaborate/govern/model-contracts.md index 339098adbdc..0da2cb19e37 100644 --- a/website/docs/docs/collaborate/govern/model-contracts.md +++ b/website/docs/docs/collaborate/govern/model-contracts.md @@ -112,3 +112,14 @@ In some cases, you can replace a test with its equivalent constraint. This has t Currently, dbt contracts apply to **all** columns defined in a model, and they require declaring explicit expectations about **all** of those columns. The explicit declaration of a contract is not an accidentβ€”it's very much the intent of this feature. We are investigating the feasibility of supporting "inferred" or "partial" contracts in the future. This would enable you to define constraints and strict data typing for a subset of columns, while still detecting breaking changes on other columns by comparing against the same model in production. If you're interested, please upvote or comment on [dbt-core#7432](https://github.com/dbt-labs/dbt-core/issues/7432). + +### How are breaking changes handled? + +When comparing to a previous project state, dbt will look for breaking changes that could impact downstream consumers. If breaking changes are detected, dbt will present a contract error. + +Breaking changes include: +- Removing an existing column +- Changing the `data_type` of an existing column +- Removing or modifying one of the `constraints` on an existing column (dbt v1.6 or higher) + +More details are available in the [contract reference](/reference/resource-configs/contract#detecting-breaking-changes). \ No newline at end of file From 0b94e375435982f838b31decec0e0692331c7b75 Mon Sep 17 00:00:00 2001 From: Greg McKeon Date: Fri, 25 Aug 2023 10:30:53 -0400 Subject: [PATCH 022/103] Update cloud-cli-installation.md Add instructions for creating dbt-cloud.yml --- website/docs/docs/cloud/cloud-cli-installation.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md index 5f03c9fca92..5e6ddf1e6b6 100644 --- a/website/docs/docs/cloud/cloud-cli-installation.md +++ b/website/docs/docs/cloud/cloud-cli-installation.md @@ -78,6 +78,8 @@ Follow the same process in [Installing dbt Cloud CLI](#manually-install-windows- > $ pwd /Users/user/dbt-projects/jaffle_shop +> $ echo "project-id: ''" > test.yml + > $ cat dbt_cloud.yml project-id: '123456' ``` From 9eed58dd54cc6806026af38e0ce79141f600e275 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Fri, 25 Aug 2023 15:47:08 +0100 Subject: [PATCH 023/103] add add'l context --- website/docs/docs/cloud/cloud-cli-installation.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md index 5e6ddf1e6b6..b464ef74f48 100644 --- a/website/docs/docs/cloud/cloud-cli-installation.md +++ b/website/docs/docs/cloud/cloud-cli-installation.md @@ -41,6 +41,8 @@ Follow the same process in [Installing dbt Cloud CLI](#manually-install-windows- ## Setting up the CLI +The following instructions are for setting up the dbt Cloud CLI. 
The `$` isn't part of the command; it indicates that you need to enter the command that follows. For example, `$ dbt run` means you should type `dbt run` into your terminal.
+
 1. Ensure that you have created a project in [dbt Cloud](https://cloud.getdbt.com/).
 2. Ensure that your personal [development credentials](https://cloud.getdbt.com/settings/profile/credentials) are set on the project.
@@ -72,7 +74,7 @@ Follow the same process in [Installing dbt Cloud CLI](#manually-install-windows-
 > $ cd ~/dbt-projects/jaffle_shop
 ```

-7. Create a dbt_cloud.yml in the root project directory. The file is required to have a `project-id` field with a valid [project ID](#glossary):
+7. Create a dbt_cloud.yml in the root project directory. The file is required to have a `project-id` field with a valid [project ID](#glossary). Enter the following three commands:

 ```bash
 > $ pwd
 /Users/user/dbt-projects/jaffle_shop

From 13fdf6fb34fa81e9b86c61f75b160fa18c1cd640 Mon Sep 17 00:00:00 2001
From: Greg McKeon
Date: Fri, 25 Aug 2023 15:53:05 -0400
Subject: [PATCH 024/103] Update cloud-cli-installation.md

---
 website/docs/docs/cloud/cloud-cli-installation.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md
index b464ef74f48..5af9b7a173a 100644
--- a/website/docs/docs/cloud/cloud-cli-installation.md
+++ b/website/docs/docs/cloud/cloud-cli-installation.md
@@ -80,7 +80,7 @@ The following instructions are for setting up the dbt Cloud CLI. The `$` isn't
 > $ pwd
 /Users/user/dbt-projects/jaffle_shop

-> $ echo "project-id: ''" > test.yml
+> $ echo "project-id: ''" > dbt_cloud.yml

 > $ cat dbt_cloud.yml
 project-id: '123456'
 ```

From f52294abe70ba7e1fc706e41b59e94e8a9054c10 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Wed, 30 Aug 2023 17:35:28 +0100
Subject: [PATCH 025/103] adding more weight to test

---
 website/docs/docs/build/groups.md   |  2 +-
 website/docs/docs/build/projects.md |  1 +
 website/docs/docs/build/tests.md    |  6 ++++--
 website/sidebars.js                 | 10 +++++-----
 4 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/website/docs/docs/build/groups.md b/website/docs/docs/build/groups.md
index aa33db07ccc..7ac5337ba0d 100644
--- a/website/docs/docs/build/groups.md
+++ b/website/docs/docs/build/groups.md
@@ -1,6 +1,6 @@
 ---
 title: "Add groups to your DAG"
-sidebar_title: "Groups"
+sidebar_label: "Groups"
 id: "groups"
 description: "When you define groups in dbt projects, you turn implicit relationships into an explicit grouping."
 keywords:
diff --git a/website/docs/docs/build/projects.md b/website/docs/docs/build/projects.md
index a7ca3638590..0d7dd889fa6 100644
--- a/website/docs/docs/build/projects.md
+++ b/website/docs/docs/build/projects.md
@@ -18,6 +18,7 @@ At a minimum, all a project needs is the `dbt_project.yml` project configuration
 | [sources](/docs/build/sources) | A way to name and describe the data loaded into your warehouse by your Extract and Load tools. |
 | [exposures](/docs/build/exposures) | A way to define and describe a downstream use of your project. |
 | [metrics](/docs/build/metrics) | A way for you to define metrics for your project. |
+| [groups](/docs/build/groups) | Groups enable collaborative node organization in restricted collections. |
 | [analysis](/docs/build/analyses) | A way to organize analytical SQL queries in your project such as the general ledger from your QuickBooks. 
| When building out the structure of your project, you should consider these impacts on your organization's workflow: diff --git a/website/docs/docs/build/tests.md b/website/docs/docs/build/tests.md index 1a40dd42b53..e40d180ee9f 100644 --- a/website/docs/docs/build/tests.md +++ b/website/docs/docs/build/tests.md @@ -1,10 +1,12 @@ --- title: "Add tests to your DAG" -sidebar_title: "Tests" +sidebar_label: "Tests" description: "Read this tutorial to learn how to use tests when building in dbt." id: "tests" +keywords: + - test, tests, testing --- - +# Add tests to your DAG ## Related reference docs * [Test command](/reference/commands/test) * [Test properties](/reference/resource-properties/tests) diff --git a/website/sidebars.js b/website/sidebars.js index d1a6c4664e7..20071752310 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -228,7 +228,6 @@ const sidebarSettings = { label: "Build your DAG", collapsed: true, items: [ - "docs/build/sources", { type: "category", label: "Models", @@ -238,11 +237,15 @@ const sidebarSettings = { "docs/build/python-models", ], }, - "docs/build/seeds", "docs/build/snapshots", + "docs/build/seeds", + "docs/build/tests", + "docs/build/jinja-macros", + "docs/build/sources", "docs/build/exposures", "docs/build/metrics", "docs/build/groups", + "docs/build/analyses", ], }, { @@ -291,7 +294,6 @@ const sidebarSettings = { label: "Enhance your models", collapsed: true, items: [ - "docs/build/tests", "docs/build/materializations", "docs/build/incremental-models", ], @@ -301,11 +303,9 @@ const sidebarSettings = { label: "Enhance your code", collapsed: true, items: [ - "docs/build/jinja-macros", "docs/build/project-variables", "docs/build/environment-variables", "docs/build/packages", - "docs/build/analyses", "docs/build/hooks-operations", ], }, From 1d88c9d6ec3ae79d3100b12b2330c404c6815bdb Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 30 Aug 2023 18:16:42 +0100 Subject: [PATCH 026/103] add keyword --- website/docs/docs/build/tests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/build/tests.md b/website/docs/docs/build/tests.md index e40d180ee9f..aff8b469b6b 100644 --- a/website/docs/docs/build/tests.md +++ b/website/docs/docs/build/tests.md @@ -4,7 +4,7 @@ sidebar_label: "Tests" description: "Read this tutorial to learn how to use tests when building in dbt." 
id: "tests" keywords: - - test, tests, testing + - test, tests, testing, dag --- # Add tests to your DAG ## Related reference docs From 0873110cfba01584307f12847cb8ff7e325cbbd5 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Wed, 30 Aug 2023 17:10:19 -0400 Subject: [PATCH 027/103] Multi-cell doc updates --- .../cloud/about-cloud/regions-ip-addresses.md | 9 ++++++++- .../set-up-sso-google-workspace.md | 2 +- website/docs/docs/deploy/webhooks.md | 20 ++++++++++++------- website/snippets/auth0-uri.md | 3 ++- 4 files changed, 24 insertions(+), 10 deletions(-) diff --git a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md index bc8c180f2fd..caeb0203a5e 100644 --- a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md +++ b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md @@ -11,10 +11,17 @@ dbt Cloud is [hosted](/docs/cloud/about-cloud/architecture) in multiple regions | Region | Location | Access URL | IP addresses | Developer plan | Team plan | Enterprise plan | |--------|----------|------------|--------------|----------------|-----------|-----------------| -| North America [^1] | AWS us-east-1 (N. Virginia) | cloud.getdbt.com | 52.45.144.63
54.81.134.249
52.22.161.231 | βœ… | βœ… | βœ… | +| North America multi-tenant [^1] | AWS us-east-1 (N. Virginia) | cloud.getdbt.com | 52.45.144.63
54.81.134.249
52.22.161.231 | βœ… | βœ… | βœ… | +| North America Cell 1 [^1] | AWS us-east-1 (N. Virginia) | {account prefix}.us1.dbt.com | [Located in Account Settings](#locating-your-dbt-cloud-ip-addresses) | ❌ | ❌ | ❌ | | EMEA [^1] | AWS eu-central-1 (Frankfurt) | emea.dbt.com | 3.123.45.39
3.126.140.248
3.72.153.148 | ❌ | ❌ | βœ… | | APAC [^1] | AWS ap-southeast-2 (Sydney)| au.dbt.com | 52.65.89.235
3.106.40.33
13.239.155.206
| ❌ | ❌ | βœ… | | Virtual Private dbt or Single tenant | Customized | Customized | Ask [Support](/community/resources/getting-help#dbt-cloud-support) for your IPs | ❌ | ❌ | βœ… | [^1]: These regions support [multi-tenant](/docs/cloud/about-cloud/tenancy) deployment environments hosted by dbt Labs. + +### Locating your dbt Cloud IP addresses + +There are two ways to view your dbt Cloud IP addresses: +- If no projects exist in the account, create a new project, and the IP addresses will be displayed during the **Configure your environment** steps. +- If you have an existing project, navigate to **Account Settings** and ensure you are in the **Projects** pane. Click on a project name, and the **Project Settings** window will open. Locate the **Connection** field and click on the name. Scroll down to the **Settings**, and the first text block lists your IP addresses. diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md b/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md index a206d359270..19779baf615 100644 --- a/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md +++ b/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md @@ -49,7 +49,7 @@ Client Secret for use in dbt Cloud. | **Application type** | internal | required | | **Application name** | dbt Cloud | required | | **Application logo** | Download the logo here | optional | -| **Authorized domains** | `getdbt.com` (US) `dbt.com` (EMEA or AU) | If deploying into a VPC, use the domain for your deployment | +| **Authorized domains** | `getdbt.com` (US multi-tenant) `getdbt.com` and `dbt.com`(US Cell 1) `dbt.com` (EMEA or AU) | If deploying into a VPC, use the domain for your deployment | | **Scopes** | `email, profile, openid` | The default scopes are sufficient | diff --git a/website/docs/docs/deploy/webhooks.md b/website/docs/docs/deploy/webhooks.md index b4ce7195363..069e7a3e283 100644 --- a/website/docs/docs/deploy/webhooks.md +++ b/website/docs/docs/deploy/webhooks.md @@ -167,7 +167,7 @@ An example of a webhook payload for an errored run: You can use the dbt Cloud API to create new webhooks that you want to subscribe to, get detailed information about your webhooks, and to manage the webhooks that are associated with your account. The following sections describe the API endpoints you can use for this. :::info Access URLs -dbt Cloud is hosted in multiple regions in the world and each region has a different access URL. People on Enterprise plans can choose to have their account hosted in any one of these regions. This section uses `cloud.getdbt.com` (which is for North America) as part of the endpoint but your access URL might be different. For a complete list of available dbt Cloud access URLs, refer to [Regions & IP addresses](/docs/cloud/about-cloud/regions-ip-addresses). +dbt Cloud is hosted in multiple regions in the world and each region has a different access URL. People on Enterprise plans can choose to have their account hosted in any one of these regions. For a complete list of available dbt Cloud access URLs, refer to [Regions & IP addresses](/docs/cloud/about-cloud/regions-ip-addresses). ::: ### List all webhook subscriptions @@ -175,12 +175,13 @@ List all webhooks that are available from a specific dbt Cloud account. 
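A minimal sketch of calling this list endpoint from Python, using the `{your access URL}` convention introduced in this patch; the access URL, account ID, token value, and `Token` authorization scheme below are placeholder assumptions rather than values taken from the docs change.

```python
import requests

# Placeholder assumptions -- substitute your own values.
ACCESS_URL = "https://cloud.getdbt.com"   # use your region- or cell-specific access URL
ACCOUNT_ID = 123
API_TOKEN = "<your dbt Cloud API token>"

response = requests.get(
    f"{ACCESS_URL}/api/v3/accounts/{ACCOUNT_ID}/webhooks/subscriptions",
    headers={"Authorization": f"Token {API_TOKEN}"},  # assumed header scheme
)
response.raise_for_status()
print(response.json())  # webhook subscriptions for the account
```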
#### Request ```shell -GET https://cloud.getdbt.com/api/v3/accounts/{account_id}/webhooks/subscriptions +GET https://{your access URL}/api/v3/accounts/{account_id}/webhooks/subscriptions ``` #### Path parameters | Name | Description | |------------|--------------------------------------| +| `your access URL` | The login URL for your dbt Cloud account. | | `account_id` | The dbt Cloud account the webhooks are associated with. | #### Response sample @@ -265,11 +266,12 @@ Get detailed information about a specific webhook. #### Request ```shell -GET https://cloud.getdbt.com/api/v3/accounts/{account_id}/webhooks/subscription/{webhook_id} +GET https://{your access URL}/api/v3/accounts/{account_id}/webhooks/subscription/{webhook_id} ``` #### Path parameters | Name | Description | |------------|--------------------------------------| +| `your access URL` | The login URL for your dbt Cloud account. | | `account_id` | The dbt Cloud account the webhook is associated with. | | `webhook_id` | The webhook you want detailed information on. | @@ -322,7 +324,7 @@ Create a new outbound webhook and specify the endpoint URL that will be subscrib #### Request sample ```shell -POST https://cloud.getdbt.com/api/v3/accounts/{account_id}/webhooks/subscriptions +POST https://{your access URL}/api/v3/accounts/{account_id}/webhooks/subscriptions ``` ```json @@ -344,6 +346,7 @@ POST https://cloud.getdbt.com/api/v3/accounts/{account_id}/webhooks/subscription #### Path parameters | Name | Description | | --- | --- | +| `your access URL` | The login URL for your dbt Cloud account. | | `account_id` | The dbt Cloud account the webhook is associated with. | #### Request parameters @@ -407,7 +410,7 @@ Update the configuration details for a specific webhook. #### Request sample ```shell -PUT https://cloud.getdbt.com/api/v3/accounts/{account_id}/webhooks/subscription/{webhook_id} +PUT https://{your access URL}/api/v3/accounts/{account_id}/webhooks/subscription/{webhook_id} ``` ```json @@ -429,6 +432,7 @@ PUT https://cloud.getdbt.com/api/v3/accounts/{account_id}/webhooks/subscription/ #### Path parameters | Name | Description | |------------|--------------------------------------| +| `your access URL` | The login URL for your dbt Cloud account. | | `account_id` | The dbt Cloud account the webhook is associated with. | | `webhook_id` | The webhook you want to update. | @@ -491,12 +495,13 @@ Test a specific webhook. #### Request ```shell -GET https://cloud.getdbt.com/api/v3/accounts/{account_id}/webhooks/subscription/{webhook_id}/test +GET https://{your access URL}/api/v3/accounts/{account_id}/webhooks/subscription/{webhook_id}/test ``` #### Path parameters | Name | Description | |------------|--------------------------------------| +| `your access URL` | The login URL for your dbt Cloud account. | | `account_id` | The dbt Cloud account the webhook is associated with. | | `webhook_id` | The webhook you want to test. | @@ -518,12 +523,13 @@ Delete a specific webhook. #### Request ```shell -DELETE https://cloud.getdbt.com/api/v3/accounts/{account_id}/webhooks/subscription/{webhook_id} +DELETE https://{your access URL}/api/v3/accounts/{account_id}/webhooks/subscription/{webhook_id} ``` #### Path parameters | Name | Description | |------------|--------------------------------------| +| `your access URL` | The login URL for your dbt Cloud account. | | `account_id` | The dbt Cloud account the webhook is associated with. | | `webhook_id` | The webhook you want to delete. 
| diff --git a/website/snippets/auth0-uri.md b/website/snippets/auth0-uri.md index 829aa310ba9..1187902f2e4 100644 --- a/website/snippets/auth0-uri.md +++ b/website/snippets/auth0-uri.md @@ -3,7 +3,8 @@ The URI used for SSO connections on multi-tenant dbt Cloud instances will vary b | Region | dbt Cloud Access URL | Auth0 SSO URI | Auth0 Entity ID * | |--------|-----------------------|-------------------------------|----------------------------------------| -| US | cloud.getdbt.com | auth.cloud.getdbt.com | us-production-mt | +| US multi-tenant | cloud.getdbt.com | auth.cloud.getdbt.com | us-production-mt | +| US cell 1 | {account prefix}.us1.dbt.com | auth.cloud.getdbt.com | us-production-mt | | EMEA | emea.dbt.com | auth.emea.dbt.com | emea-production-mt | | APAC | au.dbt.com | auth.au.dbt.com | au-production-mt | From 0c4c8cd0f464240a16d0e31e4335f69262526479 Mon Sep 17 00:00:00 2001 From: Jeremy Cohen Date: Thu, 31 Aug 2023 14:23:26 +0200 Subject: [PATCH 028/103] Clarify note about dbt-rpc deprecation --- website/docs/reference/commands/rpc.md | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/website/docs/reference/commands/rpc.md b/website/docs/reference/commands/rpc.md index a98799356ee..cf9fc57194f 100644 --- a/website/docs/reference/commands/rpc.md +++ b/website/docs/reference/commands/rpc.md @@ -12,16 +12,18 @@ description: "Remote Procedure Call (rpc) dbt server compiles and runs queries, -### Overview +:::caution Deprecation -You can use the `dbt-rpc` plugin to run a Remote Procedure Call (rpc) dbt server. This server compiles and runs queries in the context of a dbt project. Additionally, the RPC server provides methods that enable you to list and terminate running processes. We recommend running an rpc server from a directory containing a dbt project. The server will compile the project into memory, then accept requests to operate against that project's dbt context. +**The dbt-rpc plugin is deprecated.** -:::caution Deprecation -**The dbt-rpc plugin will be fully deprecated by the second half of 2023.** +dbt Labs actively maintained `dbt-rpc` for compatibility with dbt-core versions up to v1.5. Starting with dbt-core v1.6 (released in July 2023), `dbt-rpc` is no longer supported for ongoing compatibility. In the meantime, dbt Labs will be performing critical maintenance only for `dbt-rpc`, until the last compatible version of dbt-core has reached the end of official support (see [version policies](/docs/dbt-versions/core)). At that point, dbt Labs will archive this repository to be read-only. -dbt Labs is actively maintaining `dbt-rpc` up to dbt v1.4. Starting in v1.5, we intend to break `dbt-rpc` compatibility in favor of [the new dbt Server](https://github.com/dbt-labs/dbt-server). dbt Labs will perform critical maintenance only on `dbt-rpc`, until the last compatible version of dbt has reached the end of official support (thus 12 months after release of v1.4; [see Core version policies](/docs/dbt-versions/core)). ::: +### Overview + +You can use the `dbt-rpc` plugin to run a Remote Procedure Call (rpc) dbt server. This server compiles and runs queries in the context of a dbt project. Additionally, the RPC server provides methods that enable you to list and terminate running processes. We recommend running an rpc server from a directory containing a dbt project. The server will compile the project into memory, then accept requests to operate against that project's dbt context. 
+ :::caution Running on Windows We do not recommend running the rpc server on Windows because of reliability issues. A Docker container may provide a useful workaround, if required. ::: From 734b78aa6c99ea0dededf410ad1a754968fd3ece Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 31 Aug 2023 11:15:54 -0400 Subject: [PATCH 029/103] Fixing text formatting on connection page --- .../connect-redshift-postgresql-alloydb.md | 24 ++++++++----------- 1 file changed, 10 insertions(+), 14 deletions(-) diff --git a/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md b/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md index 72fe9e0449c..dae0ee1d178 100644 --- a/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md +++ b/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md @@ -47,20 +47,16 @@ To configure the SSH tunnel in dbt Cloud, you'll need to provide the hostname/IP - Verify the bastion server has its network security rules set up to accept connections from the [dbt Cloud IP addresses](/docs/cloud/about-cloud/regions-ip-addresses) on whatever port you configured. - Set up the user account by using the bastion servers instance's CLI, The following example uses the username `dbtcloud:` - `sudo groupadd dbtcloud`
- - `sudo useradd -m -g dbtcloud dbtcloud`
- - `sudo su - dbtcloud`
- - `mkdir ~/.ssh`
- - `chmod 700 ~/.ssh`
- - `touch ~/.ssh/authorized_keys`
- - `chmod 600 ~/.ssh/authorized_keys`
- +```shell +sudo groupadd dbtcloud +sudo useradd -m -g dbtcloud dbtcloud +sudo su - dbtcloud +mkdir ~/.ssh +chmod 700 ~/.ssh +touch ~/.ssh/authorized_keys +chmod 600 ~/.ssh/authorized_keys +``` + - Copy and paste the dbt Cloud generated public key, into the authorized_keys file. The Bastion server should now be ready for dbt Cloud to use as a tunnel into the Redshift environment. From 8b5ba8607db8062dd4e808e7eb80e9eaf8e24428 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 31 Aug 2023 16:50:33 +0100 Subject: [PATCH 030/103] Update website/docs/docs/build/tests.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/docs/build/tests.md | 1 - 1 file changed, 1 deletion(-) diff --git a/website/docs/docs/build/tests.md b/website/docs/docs/build/tests.md index aff8b469b6b..2cc73847667 100644 --- a/website/docs/docs/build/tests.md +++ b/website/docs/docs/build/tests.md @@ -6,7 +6,6 @@ id: "tests" keywords: - test, tests, testing, dag --- -# Add tests to your DAG ## Related reference docs * [Test command](/reference/commands/test) * [Test properties](/reference/resource-properties/tests) From c62bd35af4d9c72b569c7becae7ec784dbd9bc39 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 31 Aug 2023 16:51:59 +0100 Subject: [PATCH 031/103] Update tests.md add weight for search --- website/docs/docs/build/tests.md | 1 + 1 file changed, 1 insertion(+) diff --git a/website/docs/docs/build/tests.md b/website/docs/docs/build/tests.md index 2cc73847667..c107dacf7b2 100644 --- a/website/docs/docs/build/tests.md +++ b/website/docs/docs/build/tests.md @@ -2,6 +2,7 @@ title: "Add tests to your DAG" sidebar_label: "Tests" description: "Read this tutorial to learn how to use tests when building in dbt." +search_weight: "heavy" id: "tests" keywords: - test, tests, testing, dag From 52e8201e7b11c2c12616ad53083ec2828652649f Mon Sep 17 00:00:00 2001 From: Anders Date: Thu, 31 Aug 2023 14:15:20 -0400 Subject: [PATCH 032/103] Update add-adapter-to-trusted-list.yml --- .github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml b/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml index 1706f1c0e6a..30e47c86567 100644 --- a/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml +++ b/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml @@ -41,11 +41,5 @@ body: - label: I am a maintainer of the adapter being submited for Trusted status - label: I have read both the [Trusted adapters](https://docs.getdbt.com/docs/trusted-adapters) and [Building a Trusted Adapter](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter) pages. 
- label: I believe that the adapter currently meets the expectations given above -- label: I will ensure this adapter stays in compliance with the guidelines + - label: I will ensure this adapter stays in compliance with the guidelines - label: I understand that dbt Labs reserves the right to remove an adapter from the trusted adapter list at any time, should any of the below guidelines not be met - validations: - required: true - - - - From 9dec3a3a440a34e86d0ef73cb5092eb5e503d855 Mon Sep 17 00:00:00 2001 From: Anders Date: Thu, 31 Aug 2023 14:18:00 -0400 Subject: [PATCH 033/103] Rename add-adapter-to-trusted-list.yml to zzz_add-adapter-to-trusted-list.yml --- ...er-to-trusted-list.yml => zzz_add-adapter-to-trusted-list.yml} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename .github/ISSUE_TEMPLATE/{add-adapter-to-trusted-list.yml => zzz_add-adapter-to-trusted-list.yml} (100%) diff --git a/.github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml b/.github/ISSUE_TEMPLATE/zzz_add-adapter-to-trusted-list.yml similarity index 100% rename from .github/ISSUE_TEMPLATE/add-adapter-to-trusted-list.yml rename to .github/ISSUE_TEMPLATE/zzz_add-adapter-to-trusted-list.yml From 6bd23db683a27b91b1231d6fc4056db22f8bfa1a Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 31 Aug 2023 14:52:47 -0400 Subject: [PATCH 034/103] Auto collapse categories when new one is selected --- website/docusaurus.config.js | 1 + 1 file changed, 1 insertion(+) diff --git a/website/docusaurus.config.js b/website/docusaurus.config.js index 24030624290..0cc6299ed39 100644 --- a/website/docusaurus.config.js +++ b/website/docusaurus.config.js @@ -51,6 +51,7 @@ var siteSettings = { docs:{ sidebar: { hideable: true, + autoCollapseCategories: true, }, }, image: "/img/avatar.png", From 16c5525267fd3c0d6bd7df32758b1a2a5d2c11e1 Mon Sep 17 00:00:00 2001 From: Ammar Chalifah <38188988+ammarchalifah@users.noreply.github.com> Date: Thu, 31 Aug 2023 22:00:35 +0300 Subject: [PATCH 035/103] Update databricks-configs.md to include liquid_clustered_by parameter --- website/docs/reference/resource-configs/databricks-configs.md | 1 + 1 file changed, 1 insertion(+) diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md index 41b0bfcc5ea..5ec110fa30b 100644 --- a/website/docs/reference/resource-configs/databricks-configs.md +++ b/website/docs/reference/resource-configs/databricks-configs.md @@ -12,6 +12,7 @@ When materializing a model as `table`, you may include several optional configs | file_format | The file format to use when creating tables (`parquet`, `delta`, `hudi`, `csv`, `json`, `text`, `jdbc`, `orc`, `hive` or `libsvm`). | Optional | `delta`| | location_root | The created table uses the specified directory to store its data. The table alias is appended to it. | Optional | `/mnt/root` | | partition_by | Partition the created table by the specified columns. A directory is created for each partition. | Optional | `date_day` | +| liquid_clustered_by | Cluster the created table by the specified columns. Clustering method is based on [Delta's Liquid Clustering feature](https://docs.databricks.com/en/delta/clustering.html). | Optional | `date_day` | | clustered_by | Each partition in the created table will be split into a fixed number of buckets by the specified columns. 
| Optional | `country_code` |
 | buckets | The number of buckets to create while clustering | Required if `clustered_by` is specified | `8` |

From c3a3eee281467f035923b0b5ca134bc23a994e37 Mon Sep 17 00:00:00 2001
From: Ammar Chalifah <38188988+ammarchalifah@users.noreply.github.com>
Date: Thu, 31 Aug 2023 22:04:24 +0300
Subject: [PATCH 036/103] Add version information

---
 website/docs/reference/resource-configs/databricks-configs.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md
index 5ec110fa30b..dc7f0cd53e3 100644
--- a/website/docs/reference/resource-configs/databricks-configs.md
+++ b/website/docs/reference/resource-configs/databricks-configs.md
@@ -12,7 +12,7 @@ When materializing a model as `table`, you may include several optional configs
 | file_format | The file format to use when creating tables (`parquet`, `delta`, `hudi`, `csv`, `json`, `text`, `jdbc`, `orc`, `hive` or `libsvm`). | Optional | `delta`|
 | location_root | The created table uses the specified directory to store its data. The table alias is appended to it. | Optional | `/mnt/root` |
 | partition_by | Partition the created table by the specified columns. A directory is created for each partition. | Optional | `date_day` |
-| liquid_clustered_by | Cluster the created table by the specified columns. Clustering method is based on [Delta's Liquid Clustering feature](https://docs.databricks.com/en/delta/clustering.html). | Optional | `date_day` |
+| liquid_clustered_by | Cluster the created table by the specified columns. Clustering method is based on [Delta's Liquid Clustering feature](https://docs.databricks.com/en/delta/clustering.html). Available since dbt-databricks 1.6.2. | Optional | `date_day` |
 | clustered_by | Each partition in the created table will be split into a fixed number of buckets by the specified columns. | Optional | `country_code` |
 | buckets | The number of buckets to create while clustering | Required if `clustered_by` is specified | `8` |

From 489412c8ad101902c44cd5351d98e8cfdd63ca5f Mon Sep 17 00:00:00 2001
From: Ly Nguyen
Date: Thu, 31 Aug 2023 13:13:12 -0700
Subject: [PATCH 037/103] Release note: changes to Discovery API endpoints

---
 .../deprecation-endpoints-discovery.md | 126 ++++++++++++++++++
 1 file changed, 126 insertions(+)
 create mode 100644 website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md

diff --git a/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md b/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md
new file mode 100644
index 00000000000..b821a5ae764
--- /dev/null
+++ b/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md
@@ -0,0 +1,126 @@
+---
+title: "Query patterns and endpoints in the dbt Cloud Discovery API"
+description: "August 2023: Learn about the upcoming deprecation of certain endpoints and query patterns in the Discovery API."
+sidebar_position: 6
+sidebar_label: "Deprecation: Certain Discovery API endpoints and query patterns"
+tags: [Aug-2023, API]
+date: 2023-08-31
+---
+
+dbt Labs has deprecated and will be deprecating certain query patterns and replacing them with new conventions that will enhance the performance of the dbt Cloud [Discovery API](/docs/dbt-cloud-apis/discovery-api).
+
+All these changes will be in effect on _September 7, 2023_. 
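A minimal sketch of what a request using the new conventions described later in this release note might look like from Python; the GraphQL endpoint matches the explorer URL referenced in the note, while the service token, `Bearer` header scheme, and environment ID are placeholder assumptions.

```python
import requests

URL = "https://metadata.cloud.getdbt.com/graphql"  # GraphQL endpoint behind the explorer
TOKEN = "<service token>"                          # placeholder

QUERY = """
query ($environmentId: BigInt!, $first: Int!) {
  environment(id: $environmentId) {
    applied {
      models(first: $first) {
        edges { node { uniqueId } }
      }
    }
  }
}
"""

response = requests.post(
    URL,
    json={
        "query": QUERY,
        # BigInt values are passed as strings here to avoid JSON number precision issues.
        "variables": {"environmentId": "123456", "first": 10},
    },
    headers={"Authorization": f"Bearer {TOKEN}"},  # assumed header scheme
)
response.raise_for_status()
print(response.json())
```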
+ +We understand that these changes might require adjustments to your existing integration with the Discovery API. Please [contact us](mailto:support@getdbt.com) with any questions. We're here to help you during this transition period. + +## Job-based queries + +Job-based queries that use the data type `Int` for IDs will be deprecated. They will be marked as deprecated in the [GraphQL explorer](https://metadata.cloud.getdbt.com/graphql). The new convention will be for you to use the data type `BigInt` instead. + +This change will be in effect starting September 7, 2023. + + +Example of query before deprecation: + +```graphql +query ($jobId: Int!) { + models(jobId: $jobId){ + uniqueId + } +} +``` + +Example of query after deprecation: + +```graphql +query ($jobId: BigInt!) { + job(id: $jobId) { + models { + uniqueId + } + } +} +``` + +## modelByEnvironment queries + +The `modelByEnvironment` object will be renamed and is being moved into the `environment` object. This change is in effect starting August 15, 2023. + +Example of query before deprecation: + +```graphql +query ($environmentId: Int!, $uniqueId: String) { + modelByEnvironment(environmentId: $environmentId, uniqueId: $uniqueId) { + uniqueId + executionTime + executeCompletedAt + } +} +``` + +Example of query after deprecation: + +```graphql +query ($environmentId: BigInt!, $uniqueId: String) { + environment(id: $environmentId) { + applied { + modelHistoricalRuns(uniqueId: $uniqueId) { + uniqueId + executionTime + executeCompletedAt + } + } + } +} +``` + + +## Environment and account queries + +Environment and account queries that use `Int` as a data type for ID is deprecated. IDs now must be in `BigInt`. This change is in effect starting on August 15, 2023. + + +Example of query before deprecation: + +```graphql +query ($environmentId: Int!, $first: Int!) { + environment(id: $environmentId) { + applied { + models(first: $first) { + edges { + node { + uniqueId + executionInfo { + lastRunId + } + } + } + } + } + } +} +``` + + +Example of query after deprecation: + +```graphql +query ($environmentId: BigInt!, $first: Int!) { + environment(id: $environmentId) { + applied { + models(first: $first) { + edges { + node { + uniqueId + executionInfo { + lastRunId + } + } + } + } + } + } +} +``` + + From e85a2a61c892546be8bd922c88994d9df605b265 Mon Sep 17 00:00:00 2001 From: Ly Nguyen Date: Thu, 31 Aug 2023 13:41:08 -0700 Subject: [PATCH 038/103] Minor tweaks --- .../05-Aug-2023/deprecation-endpoints-discovery.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md b/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md index b821a5ae764..540d6d21b18 100644 --- a/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md +++ b/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md @@ -7,7 +7,7 @@ tags: [Aug-2023, API] date: 2023-08-31 --- -dbt Labs has deprecated and will be deprecating certain query patterns and replacing them with new conventions that will enhance the performance of the dbt Cloud [Discovery API](/docs/dbt-cloud-apis/discovery-api). +dbt Labs has deprecated and will be deprecating certain query patterns and replacing them with new conventions to enhance the performance of the dbt Cloud [Discovery API](/docs/dbt-cloud-apis/discovery-api). All these changes will be in effect on _September 7, 2023_. 
@@ -44,7 +44,7 @@ query ($jobId: BigInt!) { ## modelByEnvironment queries -The `modelByEnvironment` object will be renamed and is being moved into the `environment` object. This change is in effect starting August 15, 2023. +The `modelByEnvironment` object has been renamed and moved into the `environment` object. This change is in effect and has been since August 15, 2023. Example of query before deprecation: @@ -77,7 +77,7 @@ query ($environmentId: BigInt!, $uniqueId: String) { ## Environment and account queries -Environment and account queries that use `Int` as a data type for ID is deprecated. IDs now must be in `BigInt`. This change is in effect starting on August 15, 2023. +Environment and account queries that use `Int` as a data type for ID has been deprecated. IDs must now be in `BigInt`. This change is in effect and has been since August 15, 2023. Example of query before deprecation: From a048718c042e69f0f6a9650772fcac975d20e7fb Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 31 Aug 2023 17:08:33 -0400 Subject: [PATCH 039/103] Fixing correct page --- website/docs/docs/build/python-models.md | 6 + .../building-models/python-models.md | 719 ------------------ 2 files changed, 6 insertions(+), 719 deletions(-) delete mode 100644 website/docs/docs/building-a-dbt-project/building-models/python-models.md diff --git a/website/docs/docs/build/python-models.md b/website/docs/docs/build/python-models.md index 12825648501..bff65362d06 100644 --- a/website/docs/docs/build/python-models.md +++ b/website/docs/docs/build/python-models.md @@ -16,11 +16,15 @@ We encourage you to: dbt Python (`dbt-py`) models can help you solve use cases that can't be solved with SQL. You can perform analyses using tools available in the open-source Python ecosystem, including state-of-the-art packages for data science and statistics. Before, you would have needed separate infrastructure and orchestration to run Python transformations in production. Python transformations defined in dbt are models in your project with all the same capabilities around testing, documentation, and lineage. + Python models are supported in dbt Core 1.3 and higher. Learn more about [upgrading your version in dbt Cloud](https://docs.getdbt.com/docs/dbt-cloud/cloud-configuring-dbt-cloud/cloud-upgrading-dbt-versions) and [upgrading dbt Core versions](https://docs.getdbt.com/docs/core-versions#upgrading-to-new-patch-versions). To read more about Python models, change the [docs version to 1.3](/docs/build/python-models?version=1.3) (or higher) in the menu bar. + + + @@ -711,3 +715,5 @@ You can also install packages at cluster creation time by [defining cluster prop
+ + \ No newline at end of file diff --git a/website/docs/docs/building-a-dbt-project/building-models/python-models.md b/website/docs/docs/building-a-dbt-project/building-models/python-models.md deleted file mode 100644 index 9c1127bb9f2..00000000000 --- a/website/docs/docs/building-a-dbt-project/building-models/python-models.md +++ /dev/null @@ -1,719 +0,0 @@ ---- -title: "Python models" ---- - -:::info Brand new! - -dbt Core v1.3 included first-ever support for Python models. Note that only [specific data platforms](#specific-data-platforms) support dbt-py models. - -We encourage you to: -- Read [the original discussion](https://github.com/dbt-labs/dbt-core/discussions/5261) that proposed this feature. -- Contribute to [best practices for developing Python models in dbt](https://discourse.getdbt.com/t/dbt-python-model-dbt-py-best-practices/5204 ). -- Weigh in on [next steps for Python models, beyond v1.3](https://github.com/dbt-labs/dbt-core/discussions/5742). -- Join the **#dbt-core-python-models** channel in the [dbt Community Slack](https://www.getdbt.com/community/join-the-community/). - -Below, you'll see sections entitled "❓ **Our questions**." We are excited to have released a first narrow set of functionality in v1.3, which will solve real use cases. We also know this is a first step into a much wider field of possibility. We don't pretend to have all the answers. We're excited to keep developing our opinionated recommendations and next steps for product developmentβ€”and we want your help. Comment in the GitHub discussions; leave thoughts in Slack; bring up dbt + Python in casual conversation with colleagues and friends. -::: - -## About Python models in dbt - -dbt Python ("dbt-py") models will help you solve use cases that can't be solved with SQL. You can perform analyses using tools available in the open source Python ecosystem, including state-of-the-art packages for data science and statistics. Before, you would have needed separate infrastructure and orchestration to run Python transformations in production. By defining your Python transformations in dbt, they're just models in your project, with all the same capabilities around testing, documentation, and lineage. - - - -Python models are supported in dbt Core 1.3 and above. Learn more about [upgrading your version in dbt Cloud](https://docs.getdbt.com/docs/dbt-cloud/cloud-configuring-dbt-cloud/cloud-upgrading-dbt-versions) and [upgrading dbt Core versions](https://docs.getdbt.com/docs/core-versions#upgrading-to-new-patch-versions). - -To read more about Python models, change the docs version to 1.3 or higher in the menu above. - - - - - - - - -```python -import ... - -def model(dbt, session): - - my_sql_model_df = dbt.ref("my_sql_model") - - final_df = ... # stuff you can't write in SQL! - - return final_df -``` - - - - - -```yml -version: 2 - -models: - - name: my_python_model - - # Document within the same codebase - description: My transformation written in Python - - # Configure in ways that feel intuitive and familiar - config: - materialized: table - tags: ['python'] - - # Test the results of my Python transformation - columns: - - name: id - # Standard validation for 'grain' of Python results - tests: - - unique - - not_null - tests: - # Write your own validation logic (in SQL) for Python results - - [custom_generic_test](writing-custom-generic-tests) -``` - - - - - - -The prerequisites for dbt Python models include using an adapter for a data platform that supports a fully featured Python runtime. 
In a dbt Python model, all Python code is executed remotely on the platform. None of it is run by dbt locally. We believe in clearly separating _model definition_ from _model execution_. In this and many other ways, you'll find that dbt's approach to Python models mirrors its longstanding approach to modeling data in SQL. - -We've written this guide assuming that you have some familiarity with dbt. If you've never before written a dbt model, we encourage you to start by first reading [dbt Models](/docs/build/models). Throughout, we'll be drawing connections between Python models and SQL models, as well as making clear their differences. - -### What is a Python model? - -A dbt Python model is a function that reads in dbt sources or other models, applies a series of transformations, and returns a transformed dataset. DataFrame operations define the starting points, the end state, and each step along the way. - -This is similar to the role of CTEs in dbt SQL models. We use CTEs to pull in upstream datasets, define (and name) a series of meaningful transformations, and end with a final `select` statement. You can run the compiled version of a dbt SQL model to see the data included in the resulting view or table. When you `dbt run`, dbt wraps that query in `create view`, `create table`, or more complex DDL to save its results in the database. - -Instead of a final `select` statement, each Python model returns a final DataFrame. Each DataFrame operation is "lazily evaluated." In development, you can preview its data, using methods like `.show()` or `.head()`. When you run a Python model, the full result of the final DataFrame will be saved as a table in your data warehouse. - -dbt Python models have access to almost all of the same configuration options as SQL models. You can test them, document them, add `tags` and `meta` properties to them, grant access to their results to other users, and so on. You can select them by their name, their file path, their configurations, whether they are upstream or downstream of another model, or whether they have been modified compared to a previous project state. - -### Defining a Python model - -Each Python model lives in a `.py` file in your `models/` folder. It defines a function named **`model()`**, which takes two parameters: -- **`dbt`**: A class compiled by dbt Core, unique to each model, enables you to run your Python code in the context of your dbt project and DAG. -- **`session`**: A class representing your data platform’s connection to the Python backend. The session is needed to read in tables as DataFrames, and to write DataFrames back to tables. In PySpark, by convention, the `SparkSession` is named `spark`, and available globally. For consistency across platforms, we always pass it into the `model` function as an explicit argument called `session`. - -The `model()` function must return a single DataFrame. On Snowpark (Snowflake), this can be a Snowpark or pandas DataFrame. Via PySpark (Databricks + BigQuery), this can be a Spark, pandas, or pandas-on-Spark DataFrame. For more about choosing between pandas and native DataFrames, see [DataFrame API + syntax](#dataframe-api--syntax). - -When you `dbt run --select python_model`, dbt will prepare and pass in both arguments (`dbt` and `session`). All you have to do is define the function. This is how every single Python model should look: - - - -```python -def model(dbt, session): - - ... 
- - return final_df -``` - - - - -### Referencing other models - -Python models participate fully in dbt's directed acyclic graph (DAG) of transformations. Use the `dbt.ref()` method within a Python model to read in data from other models (SQL or Python). If you want to read directly from a raw source table, use `dbt.source()`. These methods return DataFrames pointing to the upstream source, model, seed, or snapshot. - - - -```python -def model(dbt, session): - - # DataFrame representing an upstream model - upstream_model = dbt.ref("upstream_model_name") - - # DataFrame representing an upstream source - upstream_source = dbt.source("upstream_source_name", "table_name") - - ... -``` - - - -Of course, you can `ref()` your Python model in downstream SQL models, too: - - - -```sql -with upstream_python_model as ( - - select * from {{ ref('my_python_model') }} - -), - -... -``` - - - -### Configuring Python models - -Just like SQL models, there are three ways to configure Python models: -1. In `dbt_project.yml`, where you can configure many models at once -2. In a dedicated `.yml` file, within the `models/` directory -3. Within the model's `.py` file, using the `dbt.config()` method - -Calling the `dbt.config()` method will set configurations for your model right within your `.py` file, similar to the `{{ config() }}` macro in `.sql` model files: - - - -```python -def model(dbt, session): - - # setting configuration - dbt.config(materialized="table") -``` - - - -There's a limit to how fancy you can get with the `dbt.config()` method. It accepts _only_ literal values (strings, booleans, and numeric types). Passing another function or a more complex data structure is not possible. The reason is that dbt statically analyzes the arguments to `config()` while parsing your model without executing your Python code. If you need to set a more complex configuration, we recommend you define it using the [`config` property](resource-properties/config) in a YAML file. - -#### Accessing project context - -dbt Python models don't use Jinja to render compiled code. Python models have limited access to global project contexts compared to SQL models. That context is made available from the `dbt` class, passed in as an argument to the `model()` function. - -Out of the box, the `dbt` class supports: -- Returning DataFrames referencing the locations of other resources: `dbt.ref()` + `dbt.source()` -- Accessing the database location of the current model: `dbt.this()` (also: `dbt.this.database`, `.schema`, `.identifier`) -- Determining if the current model's run is incremental: `dbt.is_incremental` - -It is possible to extend this context by "getting" them via `dbt.config.get()` after they are configured in the [model's config](/reference/model-configs). This includes inputs such as `var`, `env_var`, and `target`. 
If you want to use those values to power conditional logic in your model, we require setting them through a dedicated `.yml` file config: - - - -```yml -version: 2 - -models: - - name: my_python_model - config: - materialized: table - target_name: "{{ target.name }}" - specific_var: "{{ var('SPECIFIC_VAR') }}" - specific_env_var: "{{ env_var('SPECIFIC_ENV_VAR') }}" -``` - - - -Then, within the model's Python code, use the `dbt.config.get()` function to _access_ values of configurations that have been set: - - - -```python -def model(dbt, session): - target_name = dbt.config.get("target_name") - specific_var = dbt.config.get("specific_var") - specific_env_var = dbt.config.get("specific_env_var") - - orders_df = dbt.ref("fct_orders") - - # limit data in dev - if target_name == "dev": - orders_df = orders_df.limit(500) -``` - - - -### Materializations - -Python models support two materializations: -- `table` -- `incremental` - -Incremental Python models support all the same [incremental strategies](/docs/build/incremental-models#about-incremental_strategy) as their SQL counterparts. The specific strategies supported depend on your adapter. - -Python models can't be materialized as `view` or `ephemeral`. Python isn't supported for non-model resource types (like tests and snapshots). - -For incremental models, like SQL models, you will need to filter incoming tables to only new rows of data: - - - -
- - - -```python -import snowflake.snowpark.functions as F - -def model(dbt, session): - dbt.config( - materialized = "incremental", - unique_key = "id", - ) - df = dbt.ref("upstream_table") - - if dbt.is_incremental: - - # only new rows compared to max in current table - max_from_this = f"select max(updated_at) from {dbt.this}" - df = df.filter(df.updated_at > session.sql(max_from_this).collect()[0][0]) - - # or only rows from the past 3 days - df = df.filter(df.updated_at >= F.dateadd("day", F.lit(-3), F.current_timestamp())) - - ... - - return df -``` - - - -
- -
- - - -```python -import pyspark.sql.functions as F - -def model(dbt, session): - dbt.config( - materialized = "incremental", - unique_key = "id", - ) - df = dbt.ref("upstream_table") - - if dbt.is_incremental: - - # only new rows compared to max in current table - max_from_this = f"select max(updated_at) from {dbt.this}" - df = df.filter(df.updated_at > session.sql(max_from_this).collect()[0][0]) - - # or only rows from the past 3 days - df = df.filter(df.updated_at >= F.date_add(F.current_timestamp(), F.lit(-3))) - - ... - - return df -``` - - - -
- -
- -**Note:** Incremental models are supported on BigQuery/Dataproc for the `merge` incremental strategy. The `insert_overwrite` strategy is not yet supported. - -## Python-specific functionality - -### Defining functions - -In addition to defining a `model` function, the Python model can import other functions or define its own. Here's an example, on Snowpark, defining a custom `add_one` function: - - - -```python -def add_one(x): - return x + 1 - -def model(dbt, session): - dbt.config(materialized="table") - temps_df = dbt.ref("temperatures") - - # warm things up just a little - df = temps_df.withColumn("degree_plus_one", add_one(temps_df["degree"])) - return df -``` - - - -At present, Python functions defined in one dbt model can't be imported and reused in other models. See the ["Code reuse"](#code-reuse) section for the potential patterns we're considering. - -### Using PyPI packages - -You can also define functions that depend on third-party packages, so long as those packages are installed and available to the Python runtime on your data platform. See notes on "Installing Packages" for [specific data warehouses](#specific-data-warehouses). - -In this example, we use the `holidays` package to determine if a given date is a holiday in France. For simplicity and consistency across platforms, the code below uses the pandas API. The exact syntax, and the need to refactor for multi-node processing, still varies. - - - -
- - - -```python -import holidays - -def is_holiday(date_col): - # Chez Jaffle - french_holidays = holidays.France() - is_holiday = (date_col in french_holidays) - return is_holiday - -def model(dbt, session): - dbt.config( - materialized = "table", - packages = ["holidays"] - ) - - orders_df = dbt.ref("stg_orders") - - df = orders_df.to_pandas() - - # apply our function - # (columns need to be in uppercase on Snowpark) - df["IS_HOLIDAY"] = df["ORDER_DATE"].apply(is_holiday) - - # return final dataset (Pandas DataFrame) - return df -``` - - - -
- -
- - - -```python -import holidays - -def is_holiday(date_col): - # Chez Jaffle - french_holidays = holidays.France() - is_holiday = (date_col in french_holidays) - return is_holiday - -def model(dbt, session): - dbt.config( - materialized = "table", - packages = ["holidays"] - ) - - orders_df = dbt.ref("stg_orders") - - df = orders_df.to_pandas_on_spark() # Spark 3.2+ - # df = orders_df.toPandas() in earlier versions - - # apply our function - df["is_holiday"] = df["order_date"].apply(is_holiday) - - # convert back to PySpark - df = df.to_spark() # Spark 3.2+ - # df = session.createDataFrame(df) in earlier versions - - # return final dataset (PySpark DataFrame) - return df -``` - - - -
- -
- -#### Configuring packages - -We encourage you to explicitly configure required packages and versions so dbt can track them in project metadata. This configuration is required for the implementation on some platforms. If you need specific versions of packages, specify them. - - - -```python -def model(dbt, session): - dbt.config( - packages = ["numpy==1.23.1", "scikit-learn"] - ) -``` - - - - - -```yml -version: 2 - -models: - - name: my_python_model - config: - packages: - - "numpy==1.23.1" - - scikit-learn -``` - - - -#### UDFs - -You can use the `@udf` decorator or `udf` function to define an "anonymous" function and call it within your `model` function's DataFrame transformation. This is a typical pattern for applying more complex functions as DataFrame operations, especially if those functions require inputs from third-party packages. -- [Snowpark Python: Creating UDFs](https://docs.snowflake.com/en/developer-guide/snowpark/python/creating-udfs.html) -- [PySpark functions: udf](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.udf.html) - - - -
- - - -```python -import snowflake.snowpark.types as T -import snowflake.snowpark.functions as F -import numpy - -def register_udf_add_random(): - add_random = F.udf( - # use 'lambda' syntax, for simple functional behavior - lambda x: x + numpy.random.normal(), - return_type=T.FloatType(), - input_types=[T.FloatType()] - ) - return add_random - -def model(dbt, session): - - dbt.config( - materialized = "table", - packages = ["numpy"] - ) - - temps_df = dbt.ref("temperatures") - - add_random = register_udf_add_random() - - # warm things up, who knows by how much - df = temps_df.withColumn("degree_plus_random", add_random("degree")) - return df -``` - - - -**Note:** Due to a Snowpark limitation, it is not currently possible to register complex named UDFs within stored procedures, and therefore dbt Python models. We are looking to add native support for Python UDFs as a project/DAG resource type in a future release. For the time being, if you want to create a "vectorized" Python UDF via the Batch API, we recommend either: -- Writing [`create function`](https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-batch.html) inside a SQL macro, to run as a hook or run-operation -- [Registering from a staged file](https://docs.snowflake.com/ko/developer-guide/snowpark/reference/python/_autosummary/snowflake.snowpark.udf.html#snowflake.snowpark.udf.UDFRegistration.register_from_file) within your Python model code - -
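For the second option, a minimal sketch of registering a UDF from a staged file inside the model body could look like the following. The stage `@my_udf_stage`, the file `my_udfs.py`, the function `add_tax`, and the `TOTAL` column are placeholder names assumed only for illustration; they are not part of the Snowpark or dbt documentation:

```python
import snowflake.snowpark.types as T

def model(dbt, session):
    dbt.config(materialized="table")

    orders_df = dbt.ref("stg_orders")

    # Placeholder names: @my_udf_stage/my_udfs.py is assumed to define add_tax(x).
    # register_from_file returns a callable UDF object, just like F.udf does.
    add_tax = session.udf.register_from_file(
        file_path="@my_udf_stage/my_udfs.py",
        func_name="add_tax",
        return_type=T.FloatType(),
        input_types=[T.FloatType()],
    )

    # apply the staged UDF as an ordinary DataFrame transformation
    # (column names are uppercase on Snowpark)
    return orders_df.with_column("TOTAL_WITH_TAX", add_tax(orders_df["TOTAL"]))
```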
- -
- - - -```python -from pyspark.sql.types as T -import pyspark.sql.functions as F -import numpy - -# use a 'decorator' for more readable code -@F.udf(returnType=T.DoubleType()) -def add_random(x): - random_number = numpy.random.normal() - return x + random_number - -def model(dbt, session): - dbt.config( - materialized = "table", - packages = ["numpy"] - ) - - temps_df = dbt.ref("temperatures") - - # warm things up, who knows by how much - df = temps_df.withColumn("degree_plus_random", add_random("degree")) - return df -``` - - - -
- -
- -#### Code reuse - -Currently, you cannot import or reuse Python functions defined in one dbt model, in other models. This is something we'd like dbt to support. There are two patterns we're considering: -1. Creating and registering **"named" UDFs**. This process is different across data platforms and has some performance limitations. (Snowpark does support ["vectorized" UDFs](https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-batch.html): pandas-like functions that you can execute in parallel.) -2. Using **private Python packages**. In addition to importing reusable functions from public PyPI packages, many data platforms support uploading custom Python assets and registering them as packages. The upload process looks different across platforms, but your code’s actual `import` looks the same. - -:::note ❓ Our questions - -- Should dbt have a role in abstracting over UDFs? Should dbt support a new type of DAG node, `function`? Would the primary use case be code reuse across Python models or defining Python-language functions that can be called from SQL models? -- How can dbt help users when uploading or initializing private Python assets? Is this a new form of `dbt deps`? -- How can dbt support users who want to test custom functions? If defined as UDFs: "unit testing" in the database? If "pure" functions in packages: encourage adoption of `pytest`? - -πŸ’¬ Discussion: ["Python models: package, artifact/object storage, and UDF management in dbt"](https://github.com/dbt-labs/dbt-core/discussions/5741) -::: - -### DataFrame API and syntax - -Over the past decade, most people writing data transformations in Python have adopted DataFrame as their common abstraction. dbt follows this convention by returning `ref()` and `source()` as DataFrames, and it expects all Python models to return a DataFrame. - -A DataFrame is a two-dimensional data structure (rows and columns). It supports convenient methods for transforming that data, creating new columns from calculations performed on existing columns. It also offers convenient ways for previewing data while developing locally or in a notebook. - -That's about where the agreement ends. There are numerous frameworks with their own syntaxes and APIs for DataFrames. The [pandas](https://pandas.pydata.org/docs/) library offered one of the original DataFrame APIs, and its syntax is the most common to learn for new data professionals. Most newer DataFrame APIs are compatible with pandas-style syntax, though few can offer perfect interoperability. This is true for Snowpark and PySpark, which have their own DataFrame APIs. - -When developing a Python model, you will find yourself asking these questions: - -**Why pandas?** It's the most common API for DataFrames. It makes it easy to explore sampled data and develop transformations locally. You can β€œpromote” your code as-is into dbt models and run it in production for small datasets. - -**Why _not_ pandas?** Performance. pandas runs "single-node" transformations, which cannot benefit from the parallelism and distributed computing offered by modern data warehouses. This quickly becomes a problem as you operate on larger datasets. Some data platforms support optimizations for code written using pandas' DataFrame API, preventing the need for major refactors. For example, ["pandas on PySpark"](https://spark.apache.org/docs/latest/api/python/getting_started/quickstart_ps.html) offers support for 95% of pandas functionality, using the same API while still leveraging parallel processing. 
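To make that trade-off concrete, here is a small sketch (written for this comparison, not taken from any platform's docs) that computes the same daily aggregate once with the pandas-on-Spark API and once with native PySpark. The `stg_orders` model and its `order_date` and `amount` columns are assumptions for illustration only:

```python
import pyspark.sql.functions as F

def model(dbt, session):
    dbt.config(materialized="table")

    orders_df = dbt.ref("stg_orders")  # PySpark DataFrame

    # pandas-style syntax: familiar API, still evaluated by Spark in parallel (Spark 3.2+)
    orders_pd = orders_df.to_pandas_on_spark()
    daily_pd = orders_pd.groupby("order_date", as_index=False).agg({"amount": "sum"})

    # the equivalent aggregation in native PySpark syntax, shown for comparison
    daily_spark = orders_df.groupBy("order_date").agg(F.sum("amount").alias("amount"))

    # either result could be returned; a pandas-on-Spark frame is converted back first
    return daily_pd.to_spark()
```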
- -:::note ❓ Our questions -- When developing a new dbt Python model, should we recommend pandas-style syntax for rapid iteration and then refactor? -- Which open source libraries provide compelling abstractions across different data engines and vendor-specific APIs? -- Should dbt attempt to play a longer-term role in standardizing across them? - -πŸ’¬ Discussion: ["Python models: the pandas problem (and a possible solution)"](https://github.com/dbt-labs/dbt-core/discussions/5738) -::: - -### Limitations - -Python models have capabilities that SQL models do not. They also have some drawbacks compared to SQL models: - -- **Time and cost.** Python models are slower to run than SQL models, and the cloud resources that run them can be more expensive. Running Python requires more general-purpose compute. That compute might sometimes live on a separate service or architecture from your SQL models. **However:** We believe that deploying Python models via dbtβ€”with unified lineage, testing, and documentationβ€”is, from a human standpoint, **dramatically** faster and cheaper. By comparison, spinning up separate infrastructure to orchestrate Python transformations in production and different tooling to integrate with dbt is much more time-consuming and expensive. -- **Syntax differences** are even more pronounced. Over the years, dbt has done a lot, via dispatch patterns and packages such as `dbt_utils`, to abstract over differences in SQL dialects across popular data warehouses. Python offers a **much** wider field of play. If there are five ways to do something in SQL, there are 500 ways to write it in Python, all with varying performance and adherence to standards. Those options can be overwhelming. As the maintainers of dbt, we will be learning from state-of-the-art projects tackling this problem and sharing guidance as we develop it. -- **These capabilities are very new.** As data warehouses develop new features, we expect them to offer cheaper, faster, and more intuitive mechanisms for deploying Python transformations. **We reserve the right to change the underlying implementation for executing Python models in future releases.** Our commitment to you is around the code in your model `.py` files, following the documented capabilities and guidance we're providing here. - -As a general rule, if there's a transformation you could write equally well in SQL or Python, we believe that well-written SQL is preferable: it's more accessible to a greater number of colleagues, and it's easier to write code that's performant at scale. If there's a transformation you _can't_ write in SQL, or where ten lines of elegant and well-annotated Python could save you 1000 lines of hard-to-read Jinja-SQL, Python is the way to go. - -## Specific data platforms - -In their initial launch, Python models are supported on three of the most popular data platforms: Snowflake, Databricks, and BigQuery/GCP (via Dataproc). Both Databricks and GCP's Dataproc use PySpark as the processing framework. Snowflake uses its own framework, Snowpark, which has many similarities to PySpark. - - - -
- -**Additional setup:** You will need to [acknowledge and accept Snowflake Third Party Terms](https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-packages.html#getting-started) to use Anaconda packages. - -**Installing packages:** Snowpark supports several popular packages via Anaconda. The complete list is at https://repo.anaconda.com/pkgs/snowflake/. Packages are installed at the time your model is being run. Different models can have different package dependencies. If you are using third-party packages, Snowflake recommends using a dedicated virtual warehouse for best performance rather than one with many concurrent users. - -**About "sprocs":** dbt submits Python models to run as "stored procedures," which some people call "sprocs" for short. By default, dbt will create a named sproc containing your model's compiled Python code, and then "call" it to execute. Snowpark has a Private Preview feature for "temporary" or "anonymous" stored procedures ([docs](https://docs.snowflake.com/en/LIMITEDACCESS/call-with.html)), which are faster and leave a cleaner query history. If this feature is enabled for your account, you can switch it on for your models by configuring `use_anonymous_sproc: True`. We plan to switch this on for all dbt + Snowpark Python models in a future release. - - - -```yml -# I asked Snowflake Support to enable this Private Preview feature, -# and now my dbt-py models run even faster! -models: - use_anonymous_sproc: True -``` - - - -**Docs:** ["Developer Guide: Snowpark Python"](https://docs.snowflake.com/en/developer-guide/snowpark/python/index.html) - -
- -
- -**Submission methods:** Databricks supports a few different mechanisms to submit PySpark code, each with relative advantages. Some are better for supporting iterative development, while others are better for supporting lower-cost production deployments. The options are: -- `all_purpose_cluster` (default): dbt will run your Python model using the cluster ID configured as `cluster` in your connection profile or for this specific model. These clusters are more expensive but also much more responsive. We recommend using an interactive all-purpose cluster for quicker iteration in development. - - `create_notebook: True`: dbt will upload your model's compiled PySpark code to a notebook in the namespace `/Shared/dbt_python_model/{schema}`, where `{schema}` is the configured schema for the model, and execute that notebook to run using the all-purpose cluster. The appeal of this approach is that you can easily open the notebook in the Databricks UI for debugging or fine-tuning right after running your model. Remember to copy any changes into your dbt `.py` model code before re-running. - - `create_notebook: False` (default): dbt will use the [Command API](https://docs.databricks.com/dev-tools/api/1.2/index.html#run-a-command), which is slightly faster. -- `job_cluster`: dbt will upload your model's compiled PySpark code to a notebook in the namespace `/Shared/dbt_python_model/{schema}`, where `{schema}` is the configured schema for the model, and execute that notebook to run using a short-lived jobs cluster. For each Python model, Databricks will need to spin up the cluster, execute the model's PySpark transformation, and then spin down the cluster. As such, job clusters take longer before and after model execution, but they're also less expensive, so we recommend these for longer-running Python models in production. To use the `job_cluster` submission method, your model must be configured with `job_cluster_config`, which defines key-value properties for `new_cluster`, as defined in the [JobRunsSubmit API](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunsSubmit). - -You can configure each model's `submission_method` in all the standard ways you supply configuration: - -```python -def model(dbt, session): - dbt.config( - submission_method="all_purpose_cluster", - create_notebook=True, - cluster_id="abcd-1234-wxyz" - ) - ... -``` -```yml -version: 2 -models: - - name: my_python_model - config: - submission_method: job_cluster - job_cluster_config: - spark_version: ... - node_type_id: ... -``` -```yml -# dbt_project.yml -models: - project_name: - subfolder: - # set defaults for all .py models defined in this subfolder - +submission_method: all_purpose_cluster - +create_notebook: False - +cluster_id: abcd-1234-wxyz -``` - -If not configured, `dbt-spark` will use the built-in defaults: the all-purpose cluster (based on `cluster` in your connection profile) without creating a notebook. The `dbt-databricks` adapter will default to the cluster configured in `http_path`. We encourage explicitly configuring the clusters for Python models in Databricks projects. - -**Installing packages:** When using all-purpose clusters, we recommend installing packages which you will be using to run your Python models. 
- -**Docs:** -- [PySpark DataFrame syntax](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.html) -- [Databricks: Introduction to DataFrames - Python](https://docs.databricks.com/spark/latest/dataframes-datasets/introduction-to-dataframes-python.html) - -
- -
- -The `dbt-bigquery` adapter uses a service called Dataproc to submit your Python models as PySpark jobs. That Python/PySpark code will read from your tables and views in BigQuery, perform all computation in Dataproc, and write the final result back to BigQuery. - -**Submission methods.** Dataproc supports two submission methods: `serverless` and `cluster`. Dataproc Serverless does not require a ready cluster, which saves on hassle and costβ€”but it is slower to start up, and much more limited in terms of available configuration. For example, Dataproc Serverless supports only a small set of Python packages, though it does include `pandas`, `numpy`, and `scikit-learn`. (See the full list [here](https://cloud.google.com/dataproc-serverless/docs/guides/custom-containers#example_custom_container_image_build), under "The following packages are installed in the default image"). Whereas, by creating a Dataproc Cluster in advance, you can fine-tune the cluster's configuration, install any PyPI packages you want, and benefit from faster, more responsive runtimes. - -Use the `cluster` submission method with dedicated Dataproc clusters you or your organization manage. Use the `serverless` submission method to avoid managing a Spark cluster. The latter may be quicker for getting started, but both are valid for production. - -**Additional setup:** -- Create or use an existing [Cloud Storage bucket](https://cloud.google.com/storage/docs/creating-buckets) -- Enable Dataproc APIs for your project + region -- If using the `cluster` submission method: Create or use an existing [Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) with the [Spark BigQuery connector initialization action](https://github.com/GoogleCloudDataproc/initialization-actions/tree/master/connectors#bigquery-connectors). (Google recommends copying the action into your own Cloud Storage bucket, rather than using the example version shown in the screenshot below.) - - - -The following configurations are needed to run Python models on Dataproc. You can add these to your [BigQuery profile](/reference/warehouse-setups/bigquery-setup#running-python-models-on-dataproc), or configure them on specific Python models: -- `gcs_bucket`: Storage bucket to which dbt will upload your model's compiled PySpark code. -- `dataproc_region`: GCP region in which you have enabled Dataproc (for example `us-central1`) -- `dataproc_cluster_name`: Name of Dataproc cluster to use for running Python model (executing PySpark job). Only required if `submission_method: cluster`. - -```python -def model(dbt, session): - dbt.config( - submission_method="cluster", - dataproc_cluster_name="my-favorite-cluster" - ) - ... -``` -```yml -version: 2 -models: - - name: my_python_model - config: - submission_method: serverless -``` - -Any user or service account that runs dbt Python models will need the following permissions, in addition to permissions needed for BigQuery ([docs](https://cloud.google.com/dataproc/docs/concepts/iam/iam)): -``` -dataproc.clusters.use -dataproc.jobs.create -dataproc.jobs.get -dataproc.operations.get -storage.buckets.get -storage.objects.create -storage.objects.delete -``` - -**Installing packages:** If you are using a Dataproc Cluster (as opposed to Dataproc Serverless), you can add third-party packages while creating the cluster. 
- -Google recommends installing Python packages on Dataproc clusters via initialization actions: -- [How initialization actions are used](https://github.com/GoogleCloudDataproc/initialization-actions/blob/master/README.md#how-initialization-actions-are-used) -- [Actions for installing via `pip` or `conda`](https://github.com/GoogleCloudDataproc/initialization-actions/tree/master/python) - -You can also install packages at cluster creation time by [defining cluster properties](https://cloud.google.com/dataproc/docs/tutorials/python-configuration#image_version_20): `dataproc:pip.packages` or `dataproc:conda.packages`. - - - -**Docs:** -- [Dataproc overview](https://cloud.google.com/dataproc/docs/concepts/overview) -- [PySpark DataFrame syntax](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.html) - -
- -
- -
From 84fe4a912b637903181ccc22464969e7281c5c9a Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Thu, 31 Aug 2023 15:12:18 -0700 Subject: [PATCH 040/103] Update website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md follow convention for sidebar label --- .../05-Aug-2023/deprecation-endpoints-discovery.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md b/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md index 540d6d21b18..d53f892a1ba 100644 --- a/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md +++ b/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md @@ -1,5 +1,5 @@ --- -title: "Query patterns and endpoints in the dbt Cloud Discovery API" +title: "Deprecation: Query patterns and endpoints in the dbt Cloud Discovery API" description: "August 2023: Learn about the upcoming deprecation of certain endpoints and query patterns in the Discovery API." sidebar_position: 6 sidebar_label: "Deprecation: Certain Dicovery API endpoints and query patterns" From 64c55cb7fda06dc5fbfbf229f6e9679d6bdf6eea Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 1 Sep 2023 11:48:00 +0100 Subject: [PATCH 041/103] Update deprecation-endpoints-discovery.md fix typo --- .../05-Aug-2023/deprecation-endpoints-discovery.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md b/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md index d53f892a1ba..cd088b92fab 100644 --- a/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md +++ b/website/docs/docs/dbt-versions/release-notes/05-Aug-2023/deprecation-endpoints-discovery.md @@ -2,7 +2,7 @@ title: "Deprecation: Query patterns and endpoints in the dbt Cloud Discovery API" description: "August 2023: Learn about the upcoming deprecation of certain endpoints and query patterns in the Discovery API." sidebar_position: 6 -sidebar_label: "Deprecation: Certain Dicovery API endpoints and query patterns" +sidebar_label: "Deprecation: Certain Discovery API endpoints and query patterns" tags: [Aug-2023, API] date: 2023-08-31 --- @@ -77,7 +77,7 @@ query ($environmentId: BigInt!, $uniqueId: String) { ## Environment and account queries -Environment and account queries that use `Int` as a data type for ID has been deprecated. IDs must now be in `BigInt`. This change is in effect and has been since August 15, 2023. +Environment and account queries that use `Int` as a data type for ID have been deprecated. IDs must now be in `BigInt`. This change is in effect and has been since August 15, 2023. 
Example of query before deprecation: From f45ed3b56bede368ccd7a7ed15f23f21b4e9fc8d Mon Sep 17 00:00:00 2001 From: Boris Sorochkin Date: Fri, 1 Sep 2023 15:48:21 +0300 Subject: [PATCH 042/103] UPSOLVER: Documentation for features in v1.5.27 of dbt-upsolver --- .../resource-configs/upsolver-configs.md | 205 +++++++++++------- 1 file changed, 128 insertions(+), 77 deletions(-) diff --git a/website/docs/reference/resource-configs/upsolver-configs.md b/website/docs/reference/resource-configs/upsolver-configs.md index c50e49e877f..b917ee2cc58 100644 --- a/website/docs/reference/resource-configs/upsolver-configs.md +++ b/website/docs/reference/resource-configs/upsolver-configs.md @@ -4,9 +4,9 @@ id: "upsolver-configs" description: "Upsolver Configurations - Read this in-depth guide to learn about configurations in dbt." --- -## Supported Upsolver SQLake functionality: +## Supported Upsolver SQLake functionality -| Command | State | Materialized | +| COMMAND | STATE | MATERIALIZED | | ------ | ------ | ------ | | SQL compute cluster| not supported | - | | SQL connections| supported | connection | @@ -14,7 +14,7 @@ description: "Upsolver Configurations - Read this in-depth guide to learn about | SQL merge job | supported | incremental | | SQL insert job | supported | incremental | | SQL materialized views | supported | materializedview | - +| Expectations | supported | incremental | ## Configs materialization @@ -24,10 +24,12 @@ description: "Upsolver Configurations - Read this in-depth guide to learn about | connection_options | Yes | connection | Dictionary of options supported by selected connection | connection_options={ 'aws_role': 'aws_role', 'external_id': 'SAMPLES', 'read_only': True } | | incremental_strategy | No | incremental | Define one of incremental strategies: merge/copy/insert. Default: copy | incremental_strategy='merge' | | source | No | incremental | Define source to copy from: S3/KAFKA/KINESIS | source = 'S3' | -| target_type | No | incremental | Define supported target to copy into. Default: copy into a table created in a metastore connection | target_type='Snowflake' | -| target_schema | Yes/No | incremental | Define target schema. Required if target_type not table created in a metastore connection | target_schema = 'your_schema' | -| target_connection | Yes/No | incremental | Define target connection. Required if target_type not table created in a metastore connection | target_connection = 'your_snowflake_connection' | -| target_table_alias | Yes/No | incremental | Define target table. Required if target_type not table created in a metastore connection | target_table_alias = 'target_table' | +| target_type | No | incremental | Define target type REDSHIFT/ELASTICSEARCH/S3/SNOWFLAKE/POSTGRES. Default None for Data lake | target_type='Snowflake' | +| target_prefix | False | incremental | Define PREFIX for ELASTICSEARCH target type | target_prefix = 'orders' | +| target_location | False | incremental | Define LOCATION for S3 target type | target_location = 's3://your-bucket-name/path/to/folder/' | +| schema | Yes/No | incremental | Define target schema. Required if target_type, no table created in a metastore connection | schema = 'target_schema' | +| database | Yes/No | incremental | Define target connection. Required if target_type, no table created in a metastore connection | database = 'target_connection' | +| alias | Yes/No | incremental | Define target table. 
Required if target_type, no table created in a metastore connection | alias = 'target_table' | | delete_condition | No | incremental | Records that match the ON condition and a delete condition can be deleted | delete_condition='nettotal > 1000' | | partition_by | No | incremental | List of dictionaries to define partition_by for target metastore table | partition_by=[{'field':'$field_name'}] | | primary_key | No | incremental | List of dictionaries to define partition_by for target metastore table | primary_key=[{'field':'customer_email', 'type':'string'}] | @@ -35,8 +37,7 @@ description: "Upsolver Configurations - Read this in-depth guide to learn about | sync | No | incremental/materializedview | Boolean option to define if job is synchronized or non-msynchronized. Default: False | sync=True | | options | No | incremental/materializedview | Dictionary of job options | options={ 'START_FROM': 'BEGINNING', 'ADD_MISSING_COLUMNS': True } | - -## SQL connection options +## SQL connection Connections are used to provide Upsolver with the proper credentials to bring your data into SQLake as well as to write out your transformed data to various services. More details on ["Upsolver SQL connections"](https://docs.upsolver.com/sqlake/sql-command-reference/sql-connections) As a dbt model connection is a model with materialized='connection' @@ -52,26 +53,26 @@ As a dbt model connection is a model with materialized='connection' Running this model will compile CREATE CONNECTION(or ALTER CONNECTION if exists) SQL and send it to Upsolver engine. Name of the connection will be name of the model. - ## SQL copy job A COPY FROM job allows you to copy your data from a given source into a table created in a metastore connection. This table then serves as your staging table and can be used with SQLake transformation jobs to write to various target locations. More details on ["Upsolver SQL copy-from"](https://docs.upsolver.com/sqlake/sql-command-reference/sql-jobs/create-job/copy-from) As a dbt model copy job is model with materialized='incremental' + ```sql {{ config( materialized='incremental', sync=True|False, source = 'S3'| 'KAFKA' | ... , - options={ - 'option_name': 'option_value' + options={ + 'option_name': 'option_value' }, - partition_by=[{}] - ) + partition_by=[{}] + ) }} SELECT * FROM {{ ref() }} ``` -Running this model will compile CREATE TABLE SQL(or ALTER TABLE if exists) and CREATE COPY JOB(or ALTER COPY JOB if exists) SQL and send it to Upsolver engine. Name of the table will be name of the model. Name of the job will be the name of the model plus '_job' +Running this model will compile CREATE TABLE SQL for target type Data lake (or ALTER TABLE if exists) and CREATE COPY JOB(or ALTER COPY JOB if exists) SQL and send it to Upsolver engine. Name of the table will be name of the model. Name of the job will be name of the model plus '_job' ## SQL insert job @@ -85,7 +86,7 @@ As a dbt model insert job is model with materialized='incremental' and increment map_columns_by_name=True|False, incremental_strategy='insert', options={ - 'option_name': 'option_value' + 'option_name': 'option_value' }, primary_key=[{}] ) @@ -97,8 +98,7 @@ GROUP BY ... HAVING COUNT(DISTINCT orderid::string) ... ``` -Running this model will compile CREATE TABLE SQL(or ALTER TABLE if exists) and CREATE INSERT JOB(or ALTER INSERT JOB if exists) SQL and send it to Upsolver engine. Name of the table will be name of the model. 
Name of the job will be the name of the model plus '_job' - +Running this model will compile CREATE TABLE SQL for target type Data lake(or ALTER TABLE if exists) and CREATE INSERT JOB(or ALTER INSERT JOB if exists) SQL and send it to Upsolver engine. Name of the table will be name of the model. Name of the job will be name of the model plus '_job' ## SQL merge job @@ -112,7 +112,7 @@ As a dbt model merge job is model with materialized='incremental' and incrementa map_columns_by_name=True|False, incremental_strategy='merge', options={ - 'option_name': 'option_value' + 'option_name': 'option_value' }, primary_key=[{}] ) @@ -124,14 +124,14 @@ GROUP BY ... HAVING COUNT ... ``` -Running this model will compile CREATE TABLE SQL(or ALTER TABLE if exists) and CREATE MERGE JOB(or ALTER MERGE JOB if exists) SQL and send it to Upsolver engine. Name of the table will be name of the model. Name of the job will be the name of the model plus '_job' +Running this model will compile CREATE TABLE SQL for target type Data lake(or ALTER TABLE if exists) and CREATE MERGE JOB(or ALTER MERGE JOB if exists) SQL and send it to Upsolver engine. Name of the table will be name of the model. Name of the job will be name of the model plus '_job' ## SQL materialized views When transforming your data, you may find that you need data from multiple source tables in order to achieve your desired result. In such a case, you can create a materialized view from one SQLake table in order to join it with your other table (which in this case is considered the main table). More details on ["Upsolver SQL materialized views"](https://docs.upsolver.com/sqlake/sql-command-reference/sql-jobs/create-job/sql-transformation-jobs/sql-materialized-views). -As a dbt model materialized views are models with materialized='materializedview'. +As a dbt model materialized views is model with materialized='materializedview'. ```sql {{ config( materialized='materializedview', @@ -145,9 +145,9 @@ WHERE ... GROUP BY ... ``` -Running this model will compile CREATE MATERIALIZED VIEW SQL(or ALTER MATERIALIZED VIEW if exists) and send it to Upsolver engine. Name of the materializedview will be the name of the model. +Running this model will compile CREATE MATERIALIZED VIEW SQL(or ALTER MATERIALIZED VIEW if exists) and send it to Upsolver engine. Name of the materializedview will be name of the model. -## Expectations and constraints +## Expectations/constraints Data quality conditions can be added to your job to drop a row or trigger a warning when a column violates a predefined condition. 
@@ -169,7 +169,7 @@ models: # model-level constraints constraints: - type: check - columns: [`''`, `''`] + columns: ['', ''] expression: "column1 <= column2" name: - type: not_null @@ -190,7 +190,7 @@ models: ## Projects examples -> Refer to the projects examples link: [github.com/dbt-upsolver/examples/](https://github.com/Upsolver/dbt-upsolver/tree/main/examples) +> projects examples link: [github.com/dbt-upsolver/examples/](https://github.com/Upsolver/dbt-upsolver/tree/main/examples) ## Connection options @@ -199,12 +199,12 @@ models: | aws_role | s3 | True | True | 'aws_role': `''` | | external_id | s3 | True | True | 'external_id': `''` | | aws_access_key_id | s3 | True | True | 'aws_access_key_id': `''` | -| aws_secret_access_key_id | s3 | True | True | 'aws_secret_access_key_id': `''` | +| aws_secret_access_key | s3 | True | True | 'aws_secret_access_key_id': `''` | | path_display_filter | s3 | True | True | 'path_display_filter': `''` | | path_display_filters | s3 | True | True | 'path_display_filters': (`''`, ...) | | read_only | s3 | True | True | 'read_only': True/False | | encryption_kms_key | s3 | True | True | 'encryption_kms_key': `''` | -| encryption_customer_kms_key | s3 | True | True | 'encryption_customer_kms_key': `''` | +| encryption_customer_managed_key | s3 | True | True | 'encryption_customer_kms_key': `''` | | comment | s3 | True | True | 'comment': `''` | | host | kafka | False | False | 'host': `''` | | hosts | kafka | False | False | 'hosts': (`''`, ...) | @@ -231,19 +231,19 @@ models: | aws_secret_access_key | kinesis | True | True | 'aws_secret_access_key': `''` | | region | kinesis | False | False | 'region': `''` | | read_only | kinesis | False | True | 'read_only': True/False | -| max_writers | kinesis | True | True | 'max_writers': `''` | +| max_writers | kinesis | True | True | 'max_writers': `` | | stream_display_filter | kinesis | True | True | 'stream_display_filter': `''` | | stream_display_filters | kinesis | True | True | 'stream_display_filters': (`''`, ...) 
| | comment | kinesis | True | True | 'comment': `''` | | connection_string | snowflake | True | False | 'connection_string': `''` | | user_name | snowflake | True | False | 'user_name': `''` | | password | snowflake | True | False | 'password': `''` | -| max_concurrent_connections | snowflake | True | True | 'max_concurrent_connections': `''` | +| max_concurrent_connections | snowflake | True | True | 'max_concurrent_connections': `` | | comment | snowflake | True | True | 'comment': `''` | | connection_string | redshift | True | False | 'connection_string': `''` | | user_name | redshift | True | False | 'user_name': `''` | | password | redshift | True | False | 'password': `''` | -| max_concurrent_connections | redshift | True | True | 'max_concurrent_connections': `''` | +| max_concurrent_connections | redshift | True | True | 'max_concurrent_connections': `` | | comment | redshift | True | True | 'comment': `''` | | connection_string | mysql | True | False | 'connection_string': `''` | | user_name | mysql | True | False | 'user_name': `''` | @@ -257,7 +257,15 @@ models: | user_name | elasticsearch | True | False | 'user_name': `''` | | password | elasticsearch | True | False | 'password': `''` | | comment | elasticsearch | True | True | 'comment': `''` | - +| connection_string | mongodb | True | False | 'connection_string': `''` | +| user_name | mongodb | True | False | 'user_name': `''` | +| password | mongodb | True | False | 'password': `''` | +| timeout | mongodb | True | True | 'timeout': "INTERVAL 'N' SECONDS" | +| comment | mongodb | True | True | 'comment': `''` | +| connection_string | mssql | True | False | 'connection_string': `''` | +| user_name | mssql | True | False | 'user_name': `''` | +| password | mssql | True | False | 'password': `''` | +| comment | mssql | True | True | 'comment': `''` | ## Target options @@ -268,7 +276,7 @@ models: | storage_location | datalake | False | True | 'storage_location': `''` | | compute_cluster | datalake | True | True | 'compute_cluster': `''` | | compression | datalake | True | True | 'compression': 'SNAPPY/GZIP' | -| compaction_processes | datalake | True | True | 'compaction_processes': `''` | +| compaction_processes | datalake | True | True | 'compaction_processes': `` | | disable_compaction | datalake | True | True | 'disable_compaction': True/False | | retention_date_partition | datalake | False | True | 'retention_date_partition': `''` | | table_data_retention | datalake | True | True | 'table_data_retention': `''` | @@ -284,32 +292,33 @@ models: | create_table_if_missing | snowflake | False | True | 'create_table_if_missing': True/False} | | run_interval | snowflake | False | True | 'run_interval': `''` | - ## Transformation options | Option | Storage | Editable | Optional | Config Syntax | | -------| --------- | -------- | -------- | ------------- | | run_interval | s3 | False | True | 'run_interval': `''` | -| start_from | s3 | False | True | 'start_from': `''` | -| end_at | s3 | True | True | 'end_at': `''` | +| start_from | s3 | False | True | 'start_from': `'/NOW/BEGINNING'` | +| end_at | s3 | True | True | 'end_at': `'/NOW'` | | compute_cluster | s3 | True | True | 'compute_cluster': `''` | | comment | s3 | True | True | 'comment': `''` | -| allow_cartesian_products | s3 | False | True | 'allow_cartesian_products': True/False | -| aggregation_parallelism | s3 | True | True | 'aggregation_parallelism': `''` | -| run_parallelism | s3 | True | True | 'run_parallelism': `''` | -| file_format | s3 | False | False | 
'file_format': 'CSV/TSV ...' | +| skip_validations | s3 | False | True | 'skip_validations': ('ALLOW_CARTESIAN_PRODUCT', ...) | +| skip_all_validations | s3 | False | True | 'skip_all_validations': True/False | +| aggregation_parallelism | s3 | True | True | 'aggregation_parallelism': `` | +| run_parallelism | s3 | True | True | 'run_parallelism': `` | +| file_format | s3 | False | False | 'file_format': '(type = ``)' | | compression | s3 | False | True | 'compression': 'SNAPPY/GZIP ...' | | date_pattern | s3 | False | True | 'date_pattern': `''` | | output_offset | s3 | False | True | 'output_offset': `''` | -| location | s3 | False | False | 'location': `''` | | run_interval | elasticsearch | False | True | 'run_interval': `''` | -| start_from | elasticsearch | False | True | 'start_from': `''` | -| end_at | elasticsearch | True | True | 'end_at': `''` | +| routing_field_name | elasticsearch | True | True | 'routing_field_name': `''` | +| start_from | elasticsearch | False | True | 'start_from': `'/NOW/BEGINNING'` | +| end_at | elasticsearch | True | True | 'end_at': `'/NOW'` | | compute_cluster | elasticsearch | True | True | 'compute_cluster': `''` | -| allow_cartesian_products | elasticsearch | False | True | 'allow_cartesian_products': True/False | -| aggregation_parallelism | elasticsearch | True | True | 'aggregation_parallelism': `''` | -| run_parallelism | elasticsearch | True | True | 'run_parallelism': `''` | -| bulk_max_size_bytes | elasticsearch | True | True | 'bulk_max_size_bytes': `''` | +| skip_validations | elasticsearch | False | True | 'skip_validations': ('ALLOW_CARTESIAN_PRODUCT', ...) | +| skip_all_validations | elasticsearch | False | True | 'skip_all_validations': True/False | +| aggregation_parallelism | elasticsearch | True | True | 'aggregation_parallelism': `` | +| run_parallelism | elasticsearch | True | True | 'run_parallelism': `` | +| bulk_max_size_bytes | elasticsearch | True | True | 'bulk_max_size_bytes': `` | | index_partition_size | elasticsearch | True | True | 'index_partition_size': 'HOURLY/DAILY ...' | | comment | elasticsearch | True | True | 'comment': `''` | | custom_insert_expressions | snowflake | True | True | 'custom_insert_expressions': {'INSERT_TIME' : 'CURRENT_TIMESTAMP()','MY_VALUE': `''`} | @@ -317,70 +326,88 @@ models: | keep_existing_values_when_null | snowflake | True | True | 'keep_existing_values_when_null': True/False | | add_missing_columns | snowflake | False | True | 'add_missing_columns': True/False | | run_interval | snowflake | False | True | 'run_interval': `''` | -| start_from | snowflake | False | True | 'start_from': `''` | -| end_at | snowflake | True | True | 'end_at': `''` | +| commit_interval | snowflake | True | True | 'commit_interval': `''` | +| start_from | snowflake | False | True | 'start_from': `'/NOW/BEGINNING'` | +| end_at | snowflake | True | True | 'end_at': `'/NOW'` | | compute_cluster | snowflake | True | True | 'compute_cluster': `''` | -| allow_cartesian_products | snowflake | False | True | 'allow_cartesian_products': True/False | -| aggregation_parallelism | snowflake | True | True | 'aggregation_parallelism': `''` | -| run_parallelism | snowflake | True | True | 'run_parallelism': `''` | +| skip_validations | snowflake | False | True | 'skip_validations': ('ALLOW_CARTESIAN_PRODUCT', ...) 
| +| skip_all_validations | snowflake | False | True | 'skip_all_validations': True/False | +| aggregation_parallelism | snowflake | True | True | 'aggregation_parallelism': `` | +| run_parallelism | snowflake | True | True | 'run_parallelism': `` | | comment | snowflake | True | True | 'comment': `''` | | add_missing_columns | datalake | False | True | 'add_missing_columns': True/False | | run_interval | datalake | False | True | 'run_interval': `''` | -| start_from | datalake | False | True | 'start_from': `''` | -| end_at | datalake | True | True | 'end_at': `'' | +| start_from | datalake | False | True | 'start_from': `'/NOW/BEGINNING'` | +| end_at | datalake | True | True | 'end_at': `'/NOW'` | | compute_cluster | datalake | True | True | 'compute_cluster': `''` | -| allow_cartesian_products | datalake | False | True | 'allow_cartesian_products': True/False | -| aggregation_parallelism | datalake | True | True | 'aggregation_parallelism': `''` | -| run_parallelism | datalake | True | True | 'run_parallelism': `''` | +| skip_validations | datalake | False | True | 'skip_validations': ('ALLOW_CARTESIAN_PRODUCT', ...) | +| skip_all_validations | datalake | False | True | 'skip_all_validations': True/False | +| aggregation_parallelism | datalake | True | True | 'aggregation_parallelism': `` | +| run_parallelism | datalake | True | True | 'run_parallelism': `` | | comment | datalake | True | True | 'comment': `''` | | run_interval | redshift | False | True | 'run_interval': `''` | -| start_from | redshift | False | True | 'start_from': `''` | -| end_at | redshift | True | True | 'end_at': `'` | +| start_from | redshift | False | True | 'start_from': `'/NOW/BEGINNING'` | +| end_at | redshift | True | True | 'end_at': `'/NOW'` | | compute_cluster | redshift | True | True | 'compute_cluster': `''` | -| allow_cartesian_products | redshift | False | True | 'allow_cartesian_products': True/False | -| aggregation_parallelism | redshift | True | True | 'aggregation_parallelism': `''` | -| run_parallelism | redshift | True | True | 'run_parallelism': `''` | +| skip_validations | redshift | False | True | 'skip_validations': ('ALLOW_CARTESIAN_PRODUCT', ...) | +| skip_all_validations | redshift | False | True | 'skip_all_validations': True/False | +| aggregation_parallelism | redshift | True | True | 'aggregation_parallelism': `` | +| run_parallelism | redshift | True | True | 'run_parallelism': `` | | skip_failed_files | redshift | False | True | 'skip_failed_files': True/False | | fail_on_write_error | redshift | False | True | 'fail_on_write_error': True/False | | comment | redshift | True | True | 'comment': `''` | - +| run_interval | postgres | False | True | 'run_interval': `''` | +| start_from | postgres | False | True | 'start_from': `'/NOW/BEGINNING'` | +| end_at | postgres | True | True | 'end_at': `'/NOW'` | +| compute_cluster | postgres | True | True | 'compute_cluster': `''` | +| skip_validations | postgres | False | True | 'skip_validations': ('ALLOW_CARTESIAN_PRODUCT', ...) 
| +| skip_all_validations | postgres | False | True | 'skip_all_validations': True/False | +| aggregation_parallelism | postgres | True | True | 'aggregation_parallelism': `` | +| run_parallelism | postgres | True | True | 'run_parallelism': `` | +| comment | postgres | True | True | 'comment': `''` | ## Copy options | Option | Storage | Category | Editable | Optional | Config Syntax | | -------| ---------- | -------- | -------- | -------- | ------------- | -| topic | kafka | source_options | False | False | 'comment': `''` | +| topic | kafka | source_options | False | False | 'topic': `''` | | exclude_columns | kafka | job_options | False | True | 'exclude_columns': (`''`, ...) | | deduplicate_with | kafka | job_options | False | True | 'deduplicate_with': {'COLUMNS' : ['col1', 'col2'],'WINDOW': 'N HOURS'} | -| consumer_properties | kafka | job_options | True | True | 'comment': `''` | -| reader_shards | kafka | job_options | True | True | 'reader_shards': `''` | +| consumer_properties | kafka | job_options | True | True | 'consumer_properties': `''` | +| reader_shards | kafka | job_options | True | True | 'reader_shards': `` | | store_raw_data | kafka | job_options | False | True | 'store_raw_data': True/False | | start_from | kafka | job_options | False | True | 'start_from': 'BEGINNING/NOW' | -| end_at | kafka | job_options | True | True | 'end_at': `''` | +| end_at | kafka | job_options | True | True | 'end_at': `'/NOW'` | | compute_cluster | kafka | job_options | True | True | 'compute_cluster': `''` | -| run_parallelism | kafka | job_options | True | True | 'run_parallelism': `''` | +| run_parallelism | kafka | job_options | True | True | 'run_parallelism': `` | | content_type | kafka | job_options | True | True | 'content_type': 'AUTO/CSV/...' | | compression | kafka | job_options | False | True | 'compression': 'AUTO/GZIP/...' | +| column_transformations | kafka | job_options | False | True | 'column_transformations': {`''` : `''` , ...} | +| commit_interval | kafka | job_options | True | True | 'commit_interval': `''` | +| skip_validations | kafka | job_options | False | True | 'skip_validations': ('MISSING_TOPIC') | +| skip_all_validations | kafka | job_options | False | True | 'skip_all_validations': True/False | | comment | kafka | job_options | True | True | 'comment': `''` | | table_include_list | mysql | source_options | True | True | 'table_include_list': (`''`, ...) | | column_exclude_list | mysql | source_options | True | True | 'column_exclude_list': (`''`, ...) | | exclude_columns | mysql | job_options | False | True | 'exclude_columns': (`''`, ...) | | column_transformations | mysql | job_options | False | True | 'column_transformations': {`''` : `''` , ...} | | skip_snapshots | mysql | job_options | True | True | 'skip_snapshots': True/False | -| end_at | mysql | job_options | True | True | 'end_at': `''` | +| end_at | mysql | job_options | True | True | 'end_at': `'/NOW'` | | compute_cluster | mysql | job_options | True | True | 'compute_cluster': `''` | +| snapshot_parallelism | mysql | job_options | True | True | 'snapshot_parallelism': `` | +| ddl_filters | mysql | job_options | False | True | 'ddl_filters': (`''`, ...) | | comment | mysql | job_options | True | True | 'comment': `''` | | table_include_list | postgres | source_options | False | False | 'table_include_list': (`''`, ...) | | column_exclude_list | postgres | source_options | False | True | 'column_exclude_list': (`''`, ...) 
| | heartbeat_table | postgres | job_options | False | True | 'heartbeat_table': `''` | | skip_snapshots | postgres | job_options | False | True | 'skip_snapshots': True/False | | publication_name | postgres | job_options | False | False | 'publication_name': `''` | -| end_at | postgres | job_options | True | True | 'end_at': `''` | -| start_from | postgres | job_options | False | True | 'start_from': `''` | +| end_at | postgres | job_options | True | True | 'end_at': `'/NOW'` | | compute_cluster | postgres | job_options | True | True | 'compute_cluster': `''` | | comment | postgres | job_options | True | True | 'comment': `''` | | parse_json_columns | postgres | job_options | False | False | 'parse_json_columns': True/False | | column_transformations | postgres | job_options | False | True | 'column_transformations': {`''` : `''` , ...} | +| snapshot_parallelism | postgres | job_options | True | True | 'snapshot_parallelism': `` | | exclude_columns | postgres | job_options | False | True | 'exclude_columns': (`''`, ...) | | location | s3 | source_options | False | False | 'location': `''` | | date_pattern | s3 | job_options | False | True | 'date_pattern': `''` | @@ -389,25 +416,49 @@ models: | initial_load_prefix | s3 | job_options | False | True | 'initial_load_prefix': `''` | | delete_files_after_load | s3 | job_options | False | True | 'delete_files_after_load': True/False | | deduplicate_with | s3 | job_options | False | True | 'deduplicate_with': {'COLUMNS' : ['col1', 'col2'],'WINDOW': 'N HOURS'} | -| end_at | s3 | job_options | True | True | 'end_at': `''` | -| start_from | s3 | job_options | False | True | 'start_from': `''` | +| end_at | s3 | job_options | True | True | 'end_at': `'/NOW'` | +| start_from | s3 | job_options | False | True | 'start_from': `'/NOW/BEGINNING'` | | compute_cluster | s3 | job_options | True | True | 'compute_cluster': `''` | -| run_parallelism | s3 | job_options | True | True | 'run_parallelism': `''` | +| run_parallelism | s3 | job_options | True | True | 'run_parallelism': `` | | content_type | s3 | job_options | True | True | 'content_type': 'AUTO/CSV...' | | compression | s3 | job_options | False | True | 'compression': 'AUTO/GZIP...' | | comment | s3 | job_options | True | True | 'comment': `''` | | column_transformations | s3 | job_options | False | True | 'column_transformations': {`''` : `''` , ...} | +| commit_interval | s3 | job_options | True | True | 'commit_interval': `''` | +| skip_validations | s3 | job_options | False | True | 'skip_validations': ('EMPTY_PATH') | +| skip_all_validations | s3 | job_options | False | True | 'skip_all_validations': True/False | | exclude_columns | s3 | job_options | False | True | 'exclude_columns': (`''`, ...) 
| | stream | kinesis | source_options | False | False | 'stream': `''` | -| reader_shards | kinesis | job_options | True | True | 'reader_shards': `''` | +| reader_shards | kinesis | job_options | True | True | 'reader_shards': `` | | store_raw_data | kinesis | job_options | False | True | 'store_raw_data': True/False | -| start_from | kinesis | job_options | False | True | 'start_from': `''` | -| end_at | kinesis | job_options | False | True | 'end_at': `''` | +| start_from | kinesis | job_options | False | True | 'start_from': `'/NOW/BEGINNING'` | +| end_at | kinesis | job_options | False | True | 'end_at': `'/NOW'` | | compute_cluster | kinesis | job_options | True | True | 'compute_cluster': `''` | -| run_parallelism | kinesis | job_options | False | True | 'run_parallelism': `''` | +| run_parallelism | kinesis | job_options | False | True | 'run_parallelism': `` | | content_type | kinesis | job_options | True | True | 'content_type': 'AUTO/CSV...' | | compression | kinesis | job_options | False | True | 'compression': 'AUTO/GZIP...' | | comment | kinesis | job_options | True | True | 'comment': `''` | | column_transformations | kinesis | job_options | True | True | 'column_transformations': {`''` : `''` , ...} | | deduplicate_with | kinesis | job_options | False | True | 'deduplicate_with': {'COLUMNS' : ['col1', 'col2'],'WINDOW': 'N HOURS'} | +| commit_interval | kinesis | job_options | True | True | 'commit_interval': `''` | +| skip_validations | kinesis | job_options | False | True | 'skip_validations': ('MISSING_STREAM') | +| skip_all_validations | kinesis | job_options | False | True | 'skip_all_validations': True/False | | exclude_columns | kinesis | job_options | False | True | 'exclude_columns': (`''`, ...) | +| table_include_list | mssql | source_options | True | True | 'table_include_list': (`''`, ...) | +| column_exclude_list | mssql | source_options | True | True | 'column_exclude_list': (`''`, ...) | +| exclude_columns | mssql | job_options | False | True | 'exclude_columns': (`''`, ...) | +| column_transformations | mssql | job_options | False | True | 'column_transformations': {`''` : `''` , ...} | +| skip_snapshots | mssql | job_options | True | True | 'skip_snapshots': True/False | +| end_at | mssql | job_options | True | True | 'end_at': `'/NOW'` | +| compute_cluster | mssql | job_options | True | True | 'compute_cluster': `''` | +| snapshot_parallelism | mssql | job_options | True | True | 'snapshot_parallelism': `` | +| parse_json_columns | mssql | job_options | False | False | 'parse_json_columns': True/False | +| comment | mssql | job_options | True | True | 'comment': `''` | +| collection_include_list | mongodb | source_options | True | True | 'collection_include_list': (`''`, ...) | +| exclude_columns | mongodb | job_options | False | True | 'exclude_columns': (`''`, ...) 
| +| column_transformations | mongodb | job_options | False | True | 'column_transformations': {`''` : `''` , ...} | +| skip_snapshots | mongodb | job_options | True | True | 'skip_snapshots': True/False | +| end_at | mongodb | job_options | True | True | 'end_at': `'/NOW'` | +| compute_cluster | mongodb | job_options | True | True | 'compute_cluster': `''` | +| snapshot_parallelism | mongodb | job_options | True | True | 'snapshot_parallelism': `` | +| comment | mongodb | job_options | True | True | 'comment': `''` | From af4f431fef55c680d3e7d933dd30a292f944cbc5 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Fri, 1 Sep 2023 13:53:45 +0100 Subject: [PATCH 043/103] remove $ --- .../docs/docs/cloud/cloud-cli-installation.md | 58 ++++++++++--------- 1 file changed, 30 insertions(+), 28 deletions(-) diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md index 5af9b7a173a..b72a89bc702 100644 --- a/website/docs/docs/cloud/cloud-cli-installation.md +++ b/website/docs/docs/cloud/cloud-cli-installation.md @@ -6,7 +6,9 @@ description: "Instructions for installing and configuring dbt Cloud CLI" :::warning Alpha functionality -The following installation instructions are for the dbt Cloud CLI, currently in alpha. These instructions are not intended for general audiences at this time. +The following installation instructions are for the dbt Cloud CLI, currently in Alpha product lifecycle (actively in development and being tested). + +These instructions are not intended for general audiences at this time. ::: @@ -14,7 +16,7 @@ The following installation instructions are for the dbt Cloud CLI, currently in ### Install and update with Brew on MacOS (recommended) -1. Install the CLI: +1. Install the dbt Cloud CLI: ```bash brew tap dbt-labs/dbt-cli @@ -30,7 +32,7 @@ dbt --help ### Manually install (Windows and Linux) -1. Download the latest release for your platform from [Github](https://github.com/dbt-labs/dbt-cli/releases). +1. Download the latest release for your platform from [GitHub](https://github.com/dbt-labs/dbt-cli/releases). 2. Add the `dbt` executable to your path. 3. Move to a directory with a dbt project, and create a `dbt_cloud.yml` file containing your `project-id` from dbt Cloud. 4. Invoke `dbt --help` from your terminal to see a list of supported commands. @@ -41,54 +43,54 @@ Follow the same process in [Installing dbt Cloud CLI](#manually-install-windows- ## Setting up the CLI -The following instructions are for setting up the dbt Cloud CLI. The `$` aren't part of the command, they tell you that you need to input this command. For example, `$ dbt run` means you should type `dbt run` into your terminal. +The following instructions are for setting up the dbt Cloud CLI. 1. Ensure that you have created a project in [dbt Cloud](https://cloud.getdbt.com/). 2. Ensure that your personal [development credentials](https://cloud.getdbt.com/settings/profile/credentials) are set on the project. -3. Navigate to [your profile](https://cloud.getdbt.com/settings/profile) and enable the "beta features" flag under "Experimental Features." +3. Navigate to [your profile](https://cloud.getdbt.com/settings/profile) and enable the **Beta** flag under **Experimental Features.** -4. Create an environment variable with your [dbt cloud API key](https://cloud.getdbt.com/settings/profile#api-access): +4. 
Create an environment variable with your [dbt Cloud API key](https://cloud.getdbt.com/settings/profile#api-access): ```bash +vi ~/.zshrc - > $ vi ~/.zshrc - - ... - - # dbt Cloud CLI - export DBT_CLOUD_API_KEY="1234" - +# dbt Cloud CLI +export DBT_CLOUD_API_KEY="1234" # Replace "1234" with your API key ``` -5. Load the new environment variable. Note: you may need to reactivate your python virtual environment after sourcing your shell's dot file. Alternatively, restart your shell instead of sourcing the shell's dot file +5. Load the new environment variable. Note: You may need to reactivate your Python virtual environment after sourcing your shell's dot file. Alternatively, restart your shell instead of sourcing the shell's dot file ```bash - > $ source ~/.zshrc +source ~/.zshrc ``` 6. Navigate to a dbt project ```bash - > $ cd ~/dbt-projects/jaffle_shop +cd ~/dbt-projects/jaffle_shop ``` -7. Create a dbt_cloud.yml in the root project directory. The file is required to have a `project-id` field with a valid [project ID](#glossary). Enter the following three commands: +6. Create a `dbt_cloud.yml` in the root project directory. The file is required to have a `project-id` field with a valid [project ID](#glossary). Enter the following commands: ```bash -> $ pwd -/Users/user/dbt-projects/jaffle_shop +pwd # Input +/Users/user/dbt-projects/jaffle_shop # Output +``` -> $ echo "project-id: ''" > dbt_cloud.yml +```bash +echo "project-id: ''" > dbt_cloud.yml # Input +``` -> $ cat dbt_cloud.yml -project-id: '123456' +```bash +cat dbt_cloud.yml # Input +project-id: '123456' # Output ``` -You can find your project ID by selecting your project and clicking on **Develop** in the navigation bar. Your project ID is the number in the URL: https://cloud.getdbt.com/develop/26228/projects/`PROJECT_ID`. +You can find your project ID by selecting your project and clicking on **Develop** in the navigation bar. Your project ID is the number in the URL: https://cloud.getdbt.com/develop/26228/projects/PROJECT_ID. -If dbt_cloud.yml already exists, edit the file and verify the project ID field uses a valid project ID. +If `dbt_cloud.yml` already exists, edit the file, and verify the project ID field uses a valid project ID. #### Upgrade the CLI with Brew @@ -101,8 +103,8 @@ brew upgrade dbt-cloud-cli **Coming soon** -### Glossary +## Glossary -- **dbt cloud API key:** your API key found by navigating to the **gear icon**, clicking **Profile Settings**, and scrolling down to **API**. -- **Project ID:** the ID of the dbt project you're working with. Can be retrieved from the dbt cloud URL after a project has been selected, for example, `https://cloud.getdbt.com/deploy/{accountID}/projects/{projectID}` -- **Development credentials:** your personal warehouse credentials for the project you’re working with. They can be set by selecting the project and entering them in dbt Cloud. Navigate to the **gear icon**, click **Profile Settings**, and click **Credentials** from the left-side menu. +- **dbt cloud API key:** Your API key found by navigating to the **gear icon**, clicking **Profile Settings**, and scrolling down to **API**. +- **Project ID:** The ID of the dbt project you're working with. Can be retrieved from the dbt Cloud URL after a project has been selected, for example, `https://cloud.getdbt.com/deploy/{accountID}/projects/{projectID}` +- **Development credentials:** Your personal warehouse credentials for the project you’re working with. They can be set by selecting the project and entering them in dbt Cloud. 
Navigate to the **gear icon**, click **Profile Settings**, and click **Credentials** from the left-side menu. From 68f57c65ff5fee26df78f7c8fb04adde7935eacd Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 1 Sep 2023 13:57:49 +0100 Subject: [PATCH 044/103] Update website/docs/docs/cloud/cloud-cli-installation.md --- website/docs/docs/cloud/cloud-cli-installation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md index b72a89bc702..3c7f30fc523 100644 --- a/website/docs/docs/cloud/cloud-cli-installation.md +++ b/website/docs/docs/cloud/cloud-cli-installation.md @@ -6,7 +6,7 @@ description: "Instructions for installing and configuring dbt Cloud CLI" :::warning Alpha functionality -The following installation instructions are for the dbt Cloud CLI, currently in Alpha product lifecycle (actively in development and being tested). +The following installation instructions are for the dbt Cloud CLI, currently in Alpha (actively in development and being tested). These instructions are not intended for general audiences at this time. From 21ec323b6266d4995ebe4c0295a6b95fc77a4a5b Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 1 Sep 2023 13:59:47 +0100 Subject: [PATCH 045/103] Update cloud-cli-installation.md --- website/docs/docs/cloud/cloud-cli-installation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md index 3c7f30fc523..68a8ef365d6 100644 --- a/website/docs/docs/cloud/cloud-cli-installation.md +++ b/website/docs/docs/cloud/cloud-cli-installation.md @@ -72,7 +72,7 @@ source ~/.zshrc cd ~/dbt-projects/jaffle_shop ``` -6. Create a `dbt_cloud.yml` in the root project directory. The file is required to have a `project-id` field with a valid [project ID](#glossary). Enter the following commands: +7. Create a `dbt_cloud.yml` in the root project directory. The file is required to have a `project-id` field with a valid [project ID](#glossary). Enter the following commands: ```bash pwd # Input From ca0823036ec630d66acb04f8f791957867e0827a Mon Sep 17 00:00:00 2001 From: mehdi mouloudj Date: Fri, 1 Sep 2023 16:31:50 +0200 Subject: [PATCH 046/103] update dbt-glue documentation according to latest version 1.6.2 --- .../core/connect-data-platform/glue-setup.md | 826 +++++++++++++++++- 1 file changed, 802 insertions(+), 24 deletions(-) diff --git a/website/docs/docs/core/connect-data-platform/glue-setup.md b/website/docs/docs/core/connect-data-platform/glue-setup.md index e0fb9556853..99d40db5c7a 100644 --- a/website/docs/docs/core/connect-data-platform/glue-setup.md +++ b/website/docs/docs/core/connect-data-platform/glue-setup.md @@ -65,7 +65,6 @@ ETL. Read [this documentation](https://docs.aws.amazon.com/glue/latest/dg/glue-is-security.html) to configure these principals. - You will find bellow a least privileged policy to enjoy all features of **`dbt-glue`** adapter. 
Please to update variables between **`<>`**, here are explanations of these arguments: @@ -74,7 +73,7 @@ Please to update variables between **`<>`**, here are explanations of these argu |---|---| |region|The region where your Glue database is stored | |AWS Account|The AWS account where you run your pipeline| -|dbt output database|The database updated by dbt (this is the database configured in the profile.yml of your dbt environment)| +|dbt output database|The database updated by dbt (this is the schema configured in the profile.yml of your dbt environment)| |dbt source database|All databases used as source| |dbt output bucket|The bucket name where the data will be generated by dbt (the location configured in the profile.yml of your dbt environment)| |dbt source bucket|The bucket name of source databases (if they are not managed by Lake Formation)| @@ -113,9 +112,19 @@ Please to update variables between **`<>`**, here are explanations of these argu "glue:BatchDeleteTableVersion", "glue:BatchDeleteTable", "glue:DeletePartition", + "glue:GetUserDefinedFunctions", "lakeformation:ListResources", "lakeformation:BatchGrantPermissions", - "lakeformation:ListPermissions" + "lakeformation:ListPermissions", + "lakeformation:GetDataAccess", + "lakeformation:GrantPermissions", + "lakeformation:RevokePermissions", + "lakeformation:BatchRevokePermissions", + "lakeformation:AddLFTagsToResource", + "lakeformation:RemoveLFTagsFromResource", + "lakeformation:GetResourceLFTags", + "lakeformation:ListLFTags", + "lakeformation:GetLFTag", ], "Resource": [ "arn:aws:glue:::catalog", @@ -189,7 +198,7 @@ Please to update variables between **`<>`**, here are explanations of these argu ### Configuration of the local environment -Because **`dbt`** and **`dbt-glue`** adapter are compatible with Python versions 3.8, and 3.9, check the version of Python: +Because **`dbt`** and **`dbt-glue`** adapter are compatible with Python versions 3.7, 3.8, and 3.9, check the version of Python: ```bash $ python3 --version @@ -212,12 +221,17 @@ $ unzip awscliv2.zip $ sudo ./aws/install ``` -Configure the aws-glue-session package +Install boto3 package ```bash $ sudo yum install gcc krb5-devel.x86_64 python3-devel.x86_64 -y $ pip3 install β€”upgrade boto3 -$ pip3 install β€”upgrade aws-glue-sessions +``` + +Install the package: + +```bash +$ pip3 install dbt-glue ``` ### Example config @@ -232,7 +246,6 @@ workers: 2 worker_type: G.1X idle_timeout: 10 schema: "dbt_demo" -database: "dbt_demo" session_provisioning_timeout_in_seconds: 120 location: "s3://dbt_demo_bucket/dbt_demo_data" ``` @@ -241,24 +254,788 @@ location: "s3://dbt_demo_bucket/dbt_demo_data" The table below describes all the options. -|Option |Description | Mandatory | -|---|---|---| -|project_name |The dbt project name. This must be the same as the one configured in the dbt project. |yes| -|type |The driver to use. |yes| -|query-comment |A string to inject as a comment in each query that dbt runs. |no| -|role_arn |The ARN of the interactive session role created as part of the CloudFormation template. |yes| -|region |The AWS Region where you run the data pipeline. |yes| -|workers |The number of workers of a defined workerType that are allocated when a job runs. |yes| -|worker_type |The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X. |yes| -|schema |The schema used to organize data stored in Amazon S3. |yes| -|database |The database in Lake Formation. The database stores metadata tables in the Data Catalog. 
|yes| -|session_provisioning_timeout_in_seconds |The timeout in seconds for AWS Glue interactive session provisioning. |yes| -|location |The Amazon S3 location of your target data. |yes| -|idle_timeout |The AWS Glue session idle timeout in minutes. (The session stops after being idle for the specified amount of time.) |no| -|glue_version |The version of AWS Glue for this session to use. Currently, the only valid options are 2.0 and 3.0. The default value is 2.0. |no| -|security_configuration |The security configuration to use with this session. |no| -|connections |A comma-separated list of connections to use in the session. |no| +| Option | Description | Mandatory | +|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| +| project_name | The dbt project name. This must be the same as the one configured in the dbt project. | yes | +| type | The driver to use. | yes | +| query-comment | A string to inject as a comment in each query that dbt runs. | no | +| role_arn | The ARN of the glue interactive session IAM role. | yes | +| region | The AWS Region were you run the data pipeline. | yes | +| workers | The number of workers of a defined workerType that are allocated when a job runs. | yes | +| worker_type | The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X. | yes | +| schema | The schema used to organize data stored in Amazon S3.Additionally, is the database in AWS Lake Formation that stores metadata tables in the Data Catalog. | yes | +| session_provisioning_timeout_in_seconds | The timeout in seconds for AWS Glue interactive session provisioning. | yes | +| location | The Amazon S3 location of your target data. | yes | +| query_timeout_in_minutes | The timeout in minutes for a signle query. Default is 300 | no | +| idle_timeout | The AWS Glue session idle timeout in minutes. (The session stops after being idle for the specified amount of time) | no | +| glue_version | The version of AWS Glue for this session to use. Currently, the only valid options are 2.0 and 3.0. The default value is 3.0. | no | +| security_configuration | The security configuration to use with this session. | no | +| connections | A comma-separated list of connections to use in the session. | no | +| conf | Specific configuration used at the startup of the Glue Interactive Session (arg --conf) | no | +| extra_py_files | Extra python Libs that can be used by the interactive session. | no | +| delta_athena_prefix | A prefix used to create Athena compatible tables for Delta tables (if not specified, then no Athena compatible table will be created) | no | +| tags | The map of key value pairs (tags) belonging to the session. Ex: `KeyName1=Value1,KeyName2=Value2` | no | +| seed_format | By default `parquet`, can be Spark format compatible like `csv` or `json` | no | +| seed_mode | By default `overwrite`, the seed data will be overwritten, you can set it to `append` if you just want to add new data in your dataset | no | +| default_arguments | The map of key value pairs parameters belonging to the session. More information on [Job parameters used by AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html). 
Ex: `--enable-continuous-cloudwatch-log=true,--enable-continuous-log-filter=true` | no | +| glue_session_id | re-use the glue-session to run multiple dbt run commands: set a glue session id you need to use | no | +| glue_session_reuse | re-use the glue-session to run multiple dbt run commands: If set to true, the glue session will not be closed for re-use. If set to false, the session will be closed | no | +| datalake_formats | The ACID datalake format that you want to use if you are doing merge, can be `hudi`, `Γ¬ceberg` or `delta` |no| + +## Configs + +### Configuring tables + +When materializing a model as `table`, you may include several optional configs that are specific to the dbt-spark plugin, in addition to the standard [model configs](/reference/model-configs). + +| Option | Description | Required? | Example | +|---------|----------------------------------------------------|-------------------------|--------------------------| +| file_format | The file format to use when creating tables (`parquet`, `csv`, `json`, `text`, `jdbc` or `orc`). | Optional | `parquet`| +| partition_by | Partition the created table by the specified columns. A directory is created for each partition. | Optional | `date_day` | +| clustered_by | Each partition in the created table will be split into a fixed number of buckets by the specified columns. | Optional | `country_code` | +| buckets | The number of buckets to create while clustering | Required if `clustered_by` is specified | `8` | +| custom_location | By default, the adapter will store your data in the following path: `location path`/`schema`/`table`. If you don't want to follow that default behaviour, you can use this parameter to set your own custom location on S3 | No | `s3://mycustombucket/mycustompath` | +| hudi_options | When using file_format `hudi`, gives the ability to overwrite any of the default configuration options. | Optional | `{'hoodie.schema.on.read.enable': 'true'}` | +## Incremental models + +dbt seeks to offer useful and intuitive modeling abstractions by means of its built-in configurations and materializations. + +For that reason, the dbt-glue plugin leans heavily on the [`incremental_strategy` config](/docs/build/incremental-models). This config tells the incremental materialization how to build models in runs beyond their first. It can be set to one of three values: + - **`append`** (default): Insert new records without updating or overwriting any existing data. + - **`insert_overwrite`**: If `partition_by` is specified, overwrite partitions in the table with new data. If no `partition_by` is specified, overwrite the entire table with new data. + - **`merge`** (Apache Hudi and Apache Iceberg only): Match records based on a `unique_key`; update old records, insert new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.) + +Each of these strategies has its pros and cons, which we'll discuss below. As with any model config, `incremental_strategy` may be specified in `dbt_project.yml` or within a model file's `config()` block. + +**Notes:** +The default strategy is **`insert_overwrite`** + +### The `append` strategy + +Following the `append` strategy, dbt will perform an `insert into` statement with all new data. The appeal of this strategy is that it is straightforward and functional across all platforms, file types, connection methods, and Apache Spark versions. 
However, this strategy _cannot_ update, overwrite, or delete existing data, so it is likely to insert duplicate records for many data sources. + +#### Source code +```sql +{{ config( + materialized='incremental', + incremental_strategy='append', +) }} + +-- All rows returned by this query will be appended to the existing table + +select * from {{ ref('events') }} +{% if is_incremental() %} + where event_ts > (select max(event_ts) from {{ this }}) +{% endif %} +``` +#### Run Code +```sql +create temporary view spark_incremental__dbt_tmp as + + select * from analytics.events + + where event_ts >= (select max(event_ts) from {{ this }}) + +; + +insert into table analytics.spark_incremental + select `date_day`, `users` from spark_incremental__dbt_tmp +``` + +### The `insert_overwrite` strategy + +This strategy is most effective when specified alongside a `partition_by` clause in your model config. dbt will run an [atomic `insert overwrite` statement](https://spark.apache.org/docs/latest/sql-ref-syntax-dml-insert-overwrite-table.html) that dynamically replaces all partitions included in your query. Be sure to re-select _all_ of the relevant data for a partition when using this incremental strategy. + +If no `partition_by` is specified, then the `insert_overwrite` strategy will atomically replace all contents of the table, overriding all existing data with only the new records. The column schema of the table remains the same, however. This can be desirable in some limited circumstances, since it minimizes downtime while the table contents are overwritten. The operation is comparable to running `truncate` + `insert` on other databases. For atomic replacement of Delta-formatted tables, use the `table` materialization (which runs `create or replace`) instead. + +#### Source Code +```sql +{{ config( + materialized='incremental', + partition_by=['date_day'], + file_format='parquet' +) }} + +/* + Every partition returned by this query will be overwritten + when this model runs +*/ + +with new_events as ( + + select * from {{ ref('events') }} + + {% if is_incremental() %} + where date_day >= date_add(current_date, -1) + {% endif %} + +) + +select + date_day, + count(*) as users + +from events +group by 1 +``` + +#### Run Code + +```sql +create temporary view spark_incremental__dbt_tmp as + + with new_events as ( + + select * from analytics.events + + + where date_day >= date_add(current_date, -1) + + + ) + + select + date_day, + count(*) as users + + from events + group by 1 + +; + +insert overwrite table analytics.spark_incremental + partition (date_day) + select `date_day`, `users` from spark_incremental__dbt_tmp +``` + +Specifying `insert_overwrite` as the incremental strategy is optional, since it's the default strategy used when none is specified. + +### The `merge` strategy + +**Compatibility:** +- Hudi : OK +- Delta Lake : OK +- Iceberg : OK +- Lake Formation Governed Tables : On going + +NB: + +- For Glue 3: you have to setup a [Glue connectors](https://docs.aws.amazon.com/glue/latest/ug/connectors-chapter.html). + +- For Glue 4: use the `datalake_formats` option in your profile.yml + +When using a connector be sure that your IAM role has these policies: +``` +{ + "Sid": "access_to_connections", + "Action": [ + "glue:GetConnection", + "glue:GetConnections" + ], + "Resource": [ + "arn:aws:glue:::catalog", + "arn:aws:glue:::connection/*" + ], + "Effect": "Allow" +} +``` +and that the managed policy `AmazonEC2ContainerRegistryReadOnly` is attached. 
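If you manage the session role from the command line, a minimal sketch of that step with the AWS CLI might look like the following. The role name matches the example profiles used in this section, and the connection name is the same placeholder used in the profile options above; replace both with your own values.

```bash
# Attach the AWS-managed ECR read-only policy to the Glue Interactive Session role
# (the role name here mirrors the example profiles in this section -- use your own)
aws iam attach-role-policy \
  --role-name GlueInteractiveSessionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

# Confirm that the connector-backed connection referenced in your profile exists
aws glue get-connection --name name_of_your_hudi_connector
```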
+Be sure that you follow the getting started instructions [here](https://docs.aws.amazon.com/glue/latest/ug/setting-up.html#getting-started-min-privs-connectors). + + +This [blog post](https://aws.amazon.com/blogs/big-data/part-1-integrate-apache-hudi-delta-lake-apache-iceberg-datasets-at-scale-aws-glue-studio-notebook/) also explain how to setup and works with Glue Connectors + +#### Hudi + +**Usage notes:** The `merge` with Hudi incremental strategy requires: +- To add `file_format: hudi` in your table configuration +- To add a datalake_formats in your profile : `datalake_formats: hudi` + - Alternatively, to add a connections in your profile : `connections: name_of_your_hudi_connector` +- To add Kryo serializer in your Interactive Session Config (in your profile): `conf: spark.serializer=org.apache.spark.serializer.KryoSerializer --conf spark.sql.hive.convertMetastoreParquet=false` + +dbt will run an [atomic `merge` statement](https://hudi.apache.org/docs/writing_data#spark-datasource-writer) which looks nearly identical to the default merge behavior on Snowflake and BigQuery. If a `unique_key` is specified (recommended), dbt will update old records with values from new records that match on the key column. If a `unique_key` is not specified, dbt will forgo match criteria and simply insert all new records (similar to `append` strategy). + +#### Profile config example +```yaml +test_project: + target: dev + outputs: + dev: + type: glue + query-comment: my comment + role_arn: arn:aws:iam::1234567890:role/GlueInteractiveSessionRole + region: eu-west-1 + glue_version: "4.0" + workers: 2 + worker_type: G.1X + schema: "dbt_test_project" + session_provisioning_timeout_in_seconds: 120 + location: "s3://aws-dbt-glue-datalake-1234567890-eu-west-1/" + conf: spark.serializer=org.apache.spark.serializer.KryoSerializer --conf spark.sql.hive.convertMetastoreParquet=false + datalake_formats: hudi +``` + +#### Source Code example +```sql +{{ config( + materialized='incremental', + incremental_strategy='merge', + unique_key='user_id', + file_format='hudi', + hudi_options={ + 'hoodie.datasource.write.precombine.field': 'eventtime', + } +) }} + +with new_events as ( + + select * from {{ ref('events') }} + + {% if is_incremental() %} + where date_day >= date_add(current_date, -1) + {% endif %} + +) + +select + user_id, + max(date_day) as last_seen + +from events +group by 1 +``` + +#### Delta + +You can also use Delta Lake to be able to use merge feature on tables. + +**Usage notes:** The `merge` with Delta incremental strategy requires: +- To add `file_format: delta` in your table configuration +- To add a datalake_formats in your profile : `datalake_formats: delta` + - Alternatively, to add a connections in your profile : `connections: name_of_your_delta_connector` +- To add the following config in your Interactive Session Config (in your profile): `conf: "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog` + +**Athena:** Athena is not compatible by default with delta tables, but you can configure the adapter to create Athena tables on top of your delta table. 
To do so, you need to configure the two following options in your profile: +- For Delta Lake 2.1.0 supported natively in Glue 4.0: `extra_py_files: "/opt/aws_glue_connectors/selected/datalake/delta-core_2.12-2.1.0.jar"` +- For Delta Lake 1.0.0 supported natively in Glue 3.0: `extra_py_files: "/opt/aws_glue_connectors/selected/datalake/delta-core_2.12-1.0.0.jar"` +- `delta_athena_prefix: "the_prefix_of_your_choice"` +- If your table is partitioned, then the add of new partition is not automatic, you need to perform an `MSCK REPAIR TABLE your_delta_table` after each new partition adding + +#### Profile config example +```yaml +test_project: + target: dev + outputs: + dev: + type: glue + query-comment: my comment + role_arn: arn:aws:iam::1234567890:role/GlueInteractiveSessionRole + region: eu-west-1 + glue_version: "4.0" + workers: 2 + worker_type: G.1X + schema: "dbt_test_project" + session_provisioning_timeout_in_seconds: 120 + location: "s3://aws-dbt-glue-datalake-1234567890-eu-west-1/" + datalake_formats: delta + conf: "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog" + extra_py_files: "/opt/aws_glue_connectors/selected/datalake/delta-core_2.12-2.1.0.jar" + delta_athena_prefix: "delta" +``` + +#### Source Code example +```sql +{{ config( + materialized='incremental', + incremental_strategy='merge', + unique_key='user_id', + partition_by=['dt'], + file_format='delta' +) }} + +with new_events as ( + + select * from {{ ref('events') }} + + {% if is_incremental() %} + where date_day >= date_add(current_date, -1) + {% endif %} + +) + +select + user_id, + max(date_day) as last_seen, + current_date() as dt + +from events +group by 1 +``` + +#### Iceberg + +**Usage notes:** The `merge` with Iceberg incremental strategy requires: +- To attach the AmazonEC2ContainerRegistryReadOnly Manged policy to your execution role : +- To add the following policy to your execution role to enable commit locking in a dynamodb table (more info [here](https://iceberg.apache.org/docs/latest/aws/#dynamodb-lock-manager)). Note that the DynamoDB table specified in the ressource field of this policy should be the one that is mentionned in your dbt profiles (`--conf spark.sql.catalog.glue_catalog.lock.table=myGlueLockTable`). By default, this table is named `myGlueLockTable` and is created automatically (with On-Demand Pricing) when running a dbt-glue model with Incremental Materialization and Iceberg file format. If you want to name the table differently or to create your own table without letting Glue do it on your behalf, please provide the `iceberg_glue_commit_lock_table` parameter with your table name (eg. `MyDynamoDbTable`) in your dbt profile. +```yaml +iceberg_glue_commit_lock_table: "MyDynamoDbTable" +``` +- the latest connector for iceberg in AWS marketplace uses Ver 0.14.0 for Glue 3.0, and Ver 1.2.1 for Glue 4.0 where Kryo serialization fails when writing iceberg, use "org.apache.spark.serializer.JavaSerializer" for spark.serializer instead, more info [here](https://github.com/apache/iceberg/pull/546) + +Make sure you update your conf with `--conf spark.sql.catalog.glue_catalog.lock.table=` and, you change the below iam permission with your correct table name. 
+``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "CommitLockTable", + "Effect": "Allow", + "Action": [ + "dynamodb:CreateTable", + "dynamodb:BatchGetItem", + "dynamodb:BatchWriteItem", + "dynamodb:ConditionCheckItem", + "dynamodb:PutItem", + "dynamodb:DescribeTable", + "dynamodb:DeleteItem", + "dynamodb:GetItem", + "dynamodb:Scan", + "dynamodb:Query", + "dynamodb:UpdateItem" + ], + "Resource": "arn:aws:dynamodb:::table/myGlueLockTable" + } + ] +} +``` +- To add `file_format: Iceberg` in your table configuration +- To add a datalake_formats in your profile : `datalake_formats: iceberg` + - Alternatively, to add a connections in your profile : `connections: name_of_your_iceberg_connector` ( + - For Athena version 3: + - The adapter is compatible with the Iceberg Connector from AWS Marketplace with Glue 3.0 as Fulfillment option and 0.14.0 (Oct 11, 2022) as Software version) + - the latest connector for iceberg in AWS marketplace uses Ver 0.14.0 for Glue 3.0, and Ver 1.2.1 for Glue 4.0 where Kryo serialization fails when writing iceberg, use "org.apache.spark.serializer.JavaSerializer" for spark.serializer instead, more info [here](https://github.com/apache/iceberg/pull/546) + - For Athena version 2: The adapter is compatible with the Iceberg Connector from AWS Marketplace with Glue 3.0 as Fulfillment option and 0.12.0-2 (Feb 14, 2022) as Software version) +- To add the following config in your Interactive Session Config (in your profile): +```--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions + --conf spark.serializer=org.apache.spark.serializer.KryoSerializer + --conf spark.sql.warehouse=s3:// + --conf spark.sql.catalog.glue_catalog=org.apache.iceberg.spark.SparkCatalog + --conf spark.sql.catalog.glue_catalog.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog + --conf spark.sql.catalog.glue_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO + --conf spark.sql.catalog.glue_catalog.lock-impl=org.apache.iceberg.aws.dynamodb.DynamoDbLockManager + --conf spark.sql.catalog.glue_catalog.lock.table=myGlueLockTable + --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions +``` + - For Glue 3.0, set `spark.sql.catalog.glue_catalog.lock-impl` to `org.apache.iceberg.aws.glue.DynamoLockManager` instead + +dbt will run an [atomic `merge` statement](https://iceberg.apache.org/docs/latest/spark-writes/) which looks nearly identical to the default merge behavior on Snowflake and BigQuery. You need to provide a `unique_key` to perform merge operation otherwise it will fail. This key is to provide in a Python list format and can contains multiple column name to create a composite unique_key. + +##### Notes +- When using a custom_location in Iceberg, avoid to use final trailing slash. Adding a final trailing slash lead to an un-proper handling of the location, and issues when reading the data from query engines like Trino. The issue should be fixed for Iceberg version > 0.13. Related Github issue can be find [here](https://github.com/apache/iceberg/issues/4582). +- Iceberg also supports `insert_overwrite` and `append` strategies. +- The `warehouse` conf must be provided, but it's overwritten by the adapter `location` in your profile or `custom_location` in model configuration. +- By default, this materialization has `iceberg_expire_snapshots` set to 'True', if you need to have historical auditable changes, set: `iceberg_expire_snapshots='False'`. 
+- Currently, due to dbt internals, the Iceberg catalog used internally when running Glue interactive sessions with dbt-glue has the hardcoded name `glue_catalog`. This name is an alias pointing to the AWS Glue Catalog but is specific to each session. If you want to interact with your data in another session without using dbt-glue (from a Glue Studio notebook, for example), you can configure another alias (i.e., another name for the Iceberg catalog). To illustrate this concept, you can set in your configuration file:
+```
+--conf spark.sql.catalog.RandomCatalogName=org.apache.iceberg.spark.SparkCatalog
+```
+And then run in an AWS Glue Studio notebook a session with the following config:
+```
+--conf spark.sql.catalog.AnotherRandomCatalogName=org.apache.iceberg.spark.SparkCatalog
+```
+In both cases, the underlying catalog would be the AWS Glue Catalog, unique in your AWS account and Region, and you would be able to work with the exact same data. Also make sure that if you change the name of the Glue Catalog alias, you change it in all the other `--conf` settings where it's used:
+```
+ --conf spark.sql.catalog.RandomCatalogName=org.apache.iceberg.spark.SparkCatalog
+ --conf spark.sql.catalog.RandomCatalogName.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog
+ ...
+ --conf spark.sql.catalog.RandomCatalogName.lock-impl=org.apache.iceberg.aws.glue.DynamoLockManager
+```
+- A full reference to `table_properties` can be found [here](https://iceberg.apache.org/docs/latest/configuration/).
+- Iceberg tables are natively supported by Athena. Therefore, you can query tables created and operated with the dbt-glue adapter from Athena.
+- Incremental materialization with the Iceberg file format supports dbt snapshots: you can run a dbt snapshot command that queries an Iceberg table and creates a dbt-fashioned snapshot of it.
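To illustrate the Athena note above, a minimal sanity check from the Athena console (engine version 3) might look like the sketch below. The schema name comes from the example profiles in this section; the table name is a hypothetical model name, not one defined in this guide.

```sql
-- Hypothetical check from Athena engine version 3:
-- 'dbt_test_project' matches the example profiles in this section;
-- 'my_iceberg_model' is a placeholder for one of your Iceberg-backed models.
SELECT user_id, last_seen
FROM dbt_test_project.my_iceberg_model
LIMIT 10;
```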
+ +#### Profile config example +```yaml +test_project: + target: dev + outputs: + dev: + type: glue + query-comment: my comment + role_arn: arn:aws:iam::1234567890:role/GlueInteractiveSessionRole + region: eu-west-1 + glue_version: "4.0" + workers: 2 + worker_type: G.1X + schema: "dbt_test_project" + session_provisioning_timeout_in_seconds: 120 + location: "s3://aws-dbt-glue-datalake-1234567890-eu-west-1/" + datalake_formats: iceberg + conf: --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions --conf spark.serializer=org.apache.spark.serializer.KryoSerializer --conf spark.sql.warehouse=s3://aws-dbt-glue-datalake-1234567890-eu-west-1/dbt_test_project --conf spark.sql.catalog.glue_catalog=org.apache.iceberg.spark.SparkCatalog --conf spark.sql.catalog.glue_catalog.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog --conf spark.sql.catalog.glue_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO --conf spark.sql.catalog.glue_catalog.lock-impl=org.apache.iceberg.aws.dynamodb.DynamoDbLockManager --conf spark.sql.catalog.glue_catalog.lock.table=myGlueLockTable --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions +``` + +#### Source Code example +```sql +{{ config( + materialized='incremental', + incremental_strategy='merge', + unique_key=['user_id'], + file_format='iceberg', + iceberg_expire_snapshots='False', + partition_by=['status'] + table_properties={'write.target-file-size-bytes': '268435456'} +) }} + +with new_events as ( + + select * from {{ ref('events') }} + + {% if is_incremental() %} + where date_day >= date_add(current_date, -1) + {% endif %} + +) + +select + user_id, + max(date_day) as last_seen + +from events +group by 1 +``` +#### Iceberg Snapshot source code example +```sql + +{% snapshot demosnapshot %} + +{{ + config( + strategy='timestamp', + target_schema='jaffle_db', + updated_at='dt', + file_format='iceberg' +) }} + +select * from {{ ref('customers') }} + +{% endsnapshot %} + +``` + +## Monitoring your Glue Interactive Session + +Monitoring is an important part of maintaining the reliability, availability, +and performance of AWS Glue and your other AWS solutions. AWS provides monitoring +tools that you can use to watch AWS Glue, identify the required number of workers +required for your Glue Interactive Session, report when something is wrong and +take action automatically when appropriate. AWS Glue provides Spark UI, +and CloudWatch logs and metrics for monitoring your AWS Glue jobs. +More information on: [Monitoring AWS Glue Spark jobs](https://docs.aws.amazon.com/glue/latest/dg/monitor-spark.html) + +**Usage notes:** Monitoring requires: +- To add the following IAM policy to your IAM role: +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "CloudwatchMetrics", + "Effect": "Allow", + "Action": "cloudwatch:PutMetricData", + "Resource": "*", + "Condition": { + "StringEquals": { + "cloudwatch:namespace": "Glue" + } + } + }, + { + "Sid": "CloudwatchLogs", + "Effect": "Allow", + "Action": [ + "s3:PutObject", + "logs:CreateLogStream", + "logs:CreateLogGroup", + "logs:PutLogEvents" + ], + "Resource": [ + "arn:aws:logs:*:*:/aws-glue/*", + "arn:aws:s3:::bucket-to-write-sparkui-logs/*" + ] + } + ] +} +``` + +- To add monitoring parameters in your Interactive Session Config (in your profile). 
+More information on [Job parameters used by AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html) + +#### Profile config example +```yaml +test_project: + target: dev + outputs: + dev: + type: glue + query-comment: my comment + role_arn: arn:aws:iam::1234567890:role/GlueInteractiveSessionRole + region: eu-west-1 + glue_version: "4.0" + workers: 2 + worker_type: G.1X + schema: "dbt_test_project" + session_provisioning_timeout_in_seconds: 120 + location: "s3://aws-dbt-glue-datalake-1234567890-eu-west-1/" + default_arguments: "--enable-metrics=true, --enable-continuous-cloudwatch-log=true, --enable-continuous-log-filter=true, --enable-spark-ui=true, --spark-event-logs-path=s3://bucket-to-write-sparkui-logs/dbt/" +``` + +If you want to use the Spark UI, you can launch the Spark history server using a +AWS CloudFormation template that hosts the server on an EC2 instance, +or launch locally using Docker. More information on [Launching the Spark history server](https://docs.aws.amazon.com/glue/latest/dg/monitor-spark-ui-history.html#monitor-spark-ui-history-local) + +## Enabling AWS Glue Auto Scaling +Auto Scaling is available since AWS Glue version 3.0 or later. More information +on the following AWS blog post: ["Introducing AWS Glue Auto Scaling: Automatically resize serverless computing resources for lower cost with optimized Apache Spark"](https://aws.amazon.com/blogs/big-data/introducing-aws-glue-auto-scaling-automatically-resize-serverless-computing-resources-for-lower-cost-with-optimized-apache-spark/) + +With Auto Scaling enabled, you will get the following benefits: + +* AWS Glue automatically adds and removes workers from the cluster depending on the parallelism at each stage or microbatch of the job run. + +* It removes the need for you to experiment and decide on the number of workers to assign for your AWS Glue Interactive sessions. + +* Once you choose the maximum number of workers, AWS Glue will choose the right size resources for the workload. +* You can see how the size of the cluster changes during the Glue Interactive sessions run by looking at CloudWatch metrics. +More information on [Monitoring your Glue Interactive Session](#Monitoring-your-Glue-Interactive-Session). + +**Usage notes:** AWS Glue Auto Scaling requires: +- To set your AWS Glue version 3.0 or later. +- To set the maximum number of workers (if Auto Scaling is enabled, the `workers` +parameter sets the maximum number of workers) +- To set the `--enable-auto-scaling=true` parameter on your Glue Interactive Session Config (in your profile). +More information on [Job parameters used by AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html) + +#### Profile config example +```yaml +test_project: + target: dev + outputs: + dev: + type: glue + query-comment: my comment + role_arn: arn:aws:iam::1234567890:role/GlueInteractiveSessionRole + region: eu-west-1 + glue_version: "3.0" + workers: 2 + worker_type: G.1X + schema: "dbt_test_project" + session_provisioning_timeout_in_seconds: 120 + location: "s3://aws-dbt-glue-datalake-1234567890-eu-west-1/" + default_arguments: "--enable-auto-scaling=true" +``` + +## Access Glue catalog in another AWS account +In many cases, you may need to run you dbt jobs to read from another AWS account. 
+ +Review the following link https://repost.aws/knowledge-center/glue-tables-cross-accounts to set up access policies in source and target accounts + +Add the following `"spark.hadoop.hive.metastore.glue.catalogid="` to your conf in the DBT profile, as such, you can have multiple outputs for each of the accounts that you have access to. + +Note: The access cross-accounts need to be within the same AWS Region +#### Profile config example +```yaml +test_project: + target: dev + outputsAccountB: + dev: + type: glue + query-comment: my comment + role_arn: arn:aws:iam::1234567890:role/GlueInteractiveSessionRole + region: eu-west-1 + glue_version: "3.0" + workers: 2 + worker_type: G.1X + schema: "dbt_test_project" + session_provisioning_timeout_in_seconds: 120 + location: "s3://aws-dbt-glue-datalake-1234567890-eu-west-1/" + conf: "--conf hive.metastore.client.factory.class=com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory + --conf spark.hadoop.hive.metastore.glue.catalogid=" +``` + +## Persisting model descriptions + +Relation-level docs persistence is supported since dbt v0.17.0. For more +information on configuring docs persistence, see [the docs](/reference/resource-configs/persist_docs). + +When the `persist_docs` option is configured appropriately, you'll be able to +see model descriptions in the `Comment` field of `describe [table] extended` +or `show table extended in [database] like '*'`. + +## Always `schema`, never `database` + +Apache Spark uses the terms "schema" and "database" interchangeably. dbt understands +`database` to exist at a higher level than `schema`. As such, you should _never_ +use or set `database` as a node config or in the target profile when running dbt-glue. + +If you want to control the schema/database in which dbt will materialize models, +use the `schema` config and `generate_schema_name` macro _only_. +For more information, check the dbt documentation about [custom schemas](https://docs.getdbt.com/docs/build/custom-schemas). + +## AWS Lakeformation integration +The adapter supports AWS Lake Formation tags management enabling you to associate existing tags defined out of dbt-glue to database objects built by dbt-glue (database, table, view, snapshot, incremental models, seeds). + +- You can enable or disable lf-tags management via config, at model and dbt-project level (disabled by default) +- If enabled, lf-tags will be updated on every dbt run. There are table level lf-tags configs and column-level lf-tags configs. +- You can specify that you want to drop existing database, table column Lake Formation tags by setting the drop_existing config field to True (False by default, meaning existing tags are kept) +- Please note that if the tag you want to associate with the table does not exist, the dbt-glue execution will throw an error + +The adapter also supports AWS Lakeformation data cell filtering. +- You can enable or disable data-cell filtering via config, at model and dbt-project level (disabled by default) +- If enabled, data_cell_filters will be updated on every dbt run. +- You can specify that you want to drop existing table data-cell filters by setting the drop_existing config field to True (False by default, meaning existing filters are kept) +- You can leverage excluded_columns_names **OR** columns config fields to perform Column level security as well. **Please note that you can use one or the other but not both**. 
+- By default, if you don't specify any column or excluded_columns, dbt-glue does not perform Column level filtering and let the principal access all the columns. + +The below configuration let the specified principal (lf-data-scientist IAM user) access rows that have a customer_lifetime_value > 15 and all the columns specified ('customer_id', 'first_order', 'most_recent_order', 'number_of_orders') + +```sql +lf_grants={ + 'data_cell_filters': { + 'enabled': True, + 'drop_existing' : True, + 'filters': { + 'the_name_of_my_filter': { + 'row_filter': 'customer_lifetime_value>15', + 'principals': ['arn:aws:iam::123456789:user/lf-data-scientist'], + 'column_names': ['customer_id', 'first_order', 'most_recent_order', 'number_of_orders'] + } + }, + } + } +``` +The below configuration let the specified principal (lf-data-scientist IAM user) access rows that have a customer_lifetime_value > 15 and all the columns *except* the one specified ('first_name') + +```sql +lf_grants={ + 'data_cell_filters': { + 'enabled': True, + 'drop_existing' : True, + 'filters': { + 'the_name_of_my_filter': { + 'row_filter': 'customer_lifetime_value>15', + 'principals': ['arn:aws:iam::123456789:user/lf-data-scientist'], + 'excluded_column_names': ['first_name'] + } + }, + } + } +``` + +See below some examples of how you can integrate LF Tags management and data cell filtering to your configurations : + +#### At model level +This way of defining your Lakeformation rules is appropriate if you want to handle the tagging and filtering policy at object level. Remember that it overrides any configuration defined at dbt-project level. + +```sql +{{ config( + materialized='incremental', + unique_key="customer_id", + incremental_strategy='append', + lf_tags_config={ + 'enabled': true, + 'drop_existing' : False, + 'tags_database': + { + 'name_of_my_db_tag': 'value_of_my_db_tag' + }, + 'tags_table': + { + 'name_of_my_table_tag': 'value_of_my_table_tag' + }, + 'tags_columns': { + 'name_of_my_lf_tag': { + 'value_of_my_tag': ['customer_id', 'customer_lifetime_value', 'dt'] + }}}, + lf_grants={ + 'data_cell_filters': { + 'enabled': True, + 'drop_existing' : True, + 'filters': { + 'the_name_of_my_filter': { + 'row_filter': 'customer_lifetime_value>15', + 'principals': ['arn:aws:iam::123456789:user/lf-data-scientist'], + 'excluded_column_names': ['first_name'] + } + }, + } + } +) }} + + select + customers.customer_id, + customers.first_name, + customers.last_name, + customer_orders.first_order, + customer_orders.most_recent_order, + customer_orders.number_of_orders, + customer_payments.total_amount as customer_lifetime_value, + current_date() as dt + + from customers + + left join customer_orders using (customer_id) + + left join customer_payments using (customer_id) + +``` + +#### At dbt-project level +This way you can specify tags and data filtering policy for a particular path in your dbt project (eg. models, seeds, models/model_group1, etc.) +This is especially useful for seeds, for which you can't define configuration in the file directly. + +```yml +seeds: + +lf_tags_config: + enabled: true + tags_table: + name_of_my_table_tag: 'value_of_my_table_tag' + tags_database: + name_of_my_database_tag: 'value_of_my_database_tag' +models: + +lf_tags_config: + enabled: true + drop_existing: True + tags_database: + name_of_my_database_tag: 'value_of_my_database_tag' + tags_table: + name_of_my_table_tag: 'value_of_my_table_tag' +``` + +## Tests + +To perform a functional test: +1. 
Install dev requirements: +```bash +$ pip3 install -r dev-requirements.txt +``` + +2. Install dev locally +```bash +$ python3 setup.py build && python3 setup.py install_lib +``` + +3. Export variables +```bash +$ export DBT_S3_LOCATION=s3://mybucket/myprefix +$ export DBT_ROLE_ARN=arn:aws:iam::1234567890:role/GlueInteractiveSessionRole +``` + +4. Run the test +```bash +$ python3 -m pytest tests/functional +``` + +For more information, check the dbt documentation about [testing a new adapter](https://docs.getdbt.com/docs/contributing/testing-a-new-adapter). ## Caveats @@ -269,6 +1046,7 @@ Most dbt Core functionality is supported, but some features are only available w Apache Hudi-only features: 1. Incremental model updates by `unique_key` instead of `partition_by` (see [`merge` strategy](/reference/resource-configs/glue-configs#the-merge-strategy)) + Some dbt features, available on the core adapters, are not yet supported on Glue: 1. [Persisting](/reference/resource-configs/persist_docs) column-level descriptions as database comments 2. [Snapshots](/docs/build/snapshots) From 3df67d7c3dce8cac92ba5c31a02ad32bac2d563b Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 1 Sep 2023 15:57:59 +0100 Subject: [PATCH 047/103] Update glue-setup.md fix typos --- .../core/connect-data-platform/glue-setup.md | 44 +++++++++---------- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/website/docs/docs/core/connect-data-platform/glue-setup.md b/website/docs/docs/core/connect-data-platform/glue-setup.md index 99d40db5c7a..e56e5bcd902 100644 --- a/website/docs/docs/core/connect-data-platform/glue-setup.md +++ b/website/docs/docs/core/connect-data-platform/glue-setup.md @@ -58,14 +58,14 @@ For further (and more likely up-to-date) info, see the [README](https://github.c ### Configuring your AWS profile for Glue Interactive Session There are two IAM principals used with interactive sessions. -- Client principal: The princpal (either user or role) calling the AWS APIs (Glue, Lake Formation, Interactive Sessions) -from the local client. This is the principal configured in the AWS CLI and likely the same. +- Client principal: The principal (either user or role) calling the AWS APIs (Glue, Lake Formation, Interactive Sessions) +from the local client. This is the principal configured in the AWS CLI and is likely the same. - Service role: The IAM role that AWS Glue uses to execute your session. This is the same as AWS Glue ETL. Read [this documentation](https://docs.aws.amazon.com/glue/latest/dg/glue-is-security.html) to configure these principals. -You will find bellow a least privileged policy to enjoy all features of **`dbt-glue`** adapter. +You will find below a least privileged policy to enjoy all features of **`dbt-glue`** adapter. Please to update variables between **`<>`**, here are explanations of these arguments: @@ -198,7 +198,7 @@ Please to update variables between **`<>`**, here are explanations of these argu ### Configuration of the local environment -Because **`dbt`** and **`dbt-glue`** adapter are compatible with Python versions 3.7, 3.8, and 3.9, check the version of Python: +Because **`dbt`** and **`dbt-glue`** adapters are compatible with Python versions 3.7, 3.8, and 3.9, check the version of Python: ```bash $ python3 --version @@ -260,27 +260,27 @@ The table below describes all the options. | type | The driver to use. | yes | | query-comment | A string to inject as a comment in each query that dbt runs. 
| no | | role_arn | The ARN of the glue interactive session IAM role. | yes | -| region | The AWS Region were you run the data pipeline. | yes | +| region | The AWS Region where you run the data pipeline. | yes | | workers | The number of workers of a defined workerType that are allocated when a job runs. | yes | | worker_type | The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X. | yes | | schema | The schema used to organize data stored in Amazon S3.Additionally, is the database in AWS Lake Formation that stores metadata tables in the Data Catalog. | yes | | session_provisioning_timeout_in_seconds | The timeout in seconds for AWS Glue interactive session provisioning. | yes | | location | The Amazon S3 location of your target data. | yes | -| query_timeout_in_minutes | The timeout in minutes for a signle query. Default is 300 | no | +| query_timeout_in_minutes | The timeout in minutes for a single query. Default is 300 | no | | idle_timeout | The AWS Glue session idle timeout in minutes. (The session stops after being idle for the specified amount of time) | no | | glue_version | The version of AWS Glue for this session to use. Currently, the only valid options are 2.0 and 3.0. The default value is 3.0. | no | | security_configuration | The security configuration to use with this session. | no | | connections | A comma-separated list of connections to use in the session. | no | | conf | Specific configuration used at the startup of the Glue Interactive Session (arg --conf) | no | | extra_py_files | Extra python Libs that can be used by the interactive session. | no | -| delta_athena_prefix | A prefix used to create Athena compatible tables for Delta tables (if not specified, then no Athena compatible table will be created) | no | -| tags | The map of key value pairs (tags) belonging to the session. Ex: `KeyName1=Value1,KeyName2=Value2` | no | +| delta_athena_prefix | A prefix used to create Athena-compatible tables for Delta tables (if not specified, then no Athena-compatible table will be created) | no | +| tags | The map of key-value pairs (tags) belonging to the session. Ex: `KeyName1=Value1,KeyName2=Value2` | no | | seed_format | By default `parquet`, can be Spark format compatible like `csv` or `json` | no | | seed_mode | By default `overwrite`, the seed data will be overwritten, you can set it to `append` if you just want to add new data in your dataset | no | -| default_arguments | The map of key value pairs parameters belonging to the session. More information on [Job parameters used by AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html). Ex: `--enable-continuous-cloudwatch-log=true,--enable-continuous-log-filter=true` | no | +| default_arguments | The map of key-value pairs parameters belonging to the session. More information on [Job parameters used by AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html). Ex: `--enable-continuous-cloudwatch-log=true,--enable-continuous-log-filter=true` | no | | glue_session_id | re-use the glue-session to run multiple dbt run commands: set a glue session id you need to use | no | -| glue_session_reuse | re-use the glue-session to run multiple dbt run commands: If set to true, the glue session will not be closed for re-use. 
If set to false, the session will be closed | no | -| datalake_formats | The ACID datalake format that you want to use if you are doing merge, can be `hudi`, `Γ¬ceberg` or `delta` |no| +| glue_session_reuse | Reuse the glue-session to run multiple dbt run commands: If set to true, the glue session will not be closed for re-use. If set to false, the session will be closed | no | +| datalake_formats | The ACID data lake format that you want to use if you are doing merge, can be `hudi`, `Γ¬ceberg` or `delta` |no| ## Configs @@ -303,7 +303,7 @@ dbt seeks to offer useful and intuitive modeling abstractions by means of its bu For that reason, the dbt-glue plugin leans heavily on the [`incremental_strategy` config](/docs/build/incremental-models). This config tells the incremental materialization how to build models in runs beyond their first. It can be set to one of three values: - **`append`** (default): Insert new records without updating or overwriting any existing data. - **`insert_overwrite`**: If `partition_by` is specified, overwrite partitions in the table with new data. If no `partition_by` is specified, overwrite the entire table with new data. - - **`merge`** (Apache Hudi and Apache Iceberg only): Match records based on a `unique_key`; update old records, insert new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.) + - **`merge`** (Apache Hudi and Apache Iceberg only): Match records based on a `unique_key`; update old records, and insert new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.) Each of these strategies has its pros and cons, which we'll discuss below. As with any model config, `incremental_strategy` may be specified in `dbt_project.yml` or within a model file's `config()` block. @@ -346,7 +346,7 @@ insert into table analytics.spark_incremental This strategy is most effective when specified alongside a `partition_by` clause in your model config. dbt will run an [atomic `insert overwrite` statement](https://spark.apache.org/docs/latest/sql-ref-syntax-dml-insert-overwrite-table.html) that dynamically replaces all partitions included in your query. Be sure to re-select _all_ of the relevant data for a partition when using this incremental strategy. -If no `partition_by` is specified, then the `insert_overwrite` strategy will atomically replace all contents of the table, overriding all existing data with only the new records. The column schema of the table remains the same, however. This can be desirable in some limited circumstances, since it minimizes downtime while the table contents are overwritten. The operation is comparable to running `truncate` + `insert` on other databases. For atomic replacement of Delta-formatted tables, use the `table` materialization (which runs `create or replace`) instead. +If no `partition_by` is specified, then the `insert_overwrite` strategy will atomically replace all contents of the table, overriding all existing data with only the new records. The column schema of the table remains the same, however. This can be desirable in some limited circumstances since it minimizes downtime while the table contents are overwritten. The operation is comparable to running `truncate` + `insert` on other databases. For atomic replacement of Delta-formatted tables, use the `table` materialization (which runs `create or replace`) instead. 
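To make the no-`partition_by` case above concrete, here is a minimal sketch of an incremental model that relies on this full-table replacement behavior. The query body and the `events`, `date_day`, and `users` names are illustrative placeholders, not part of the adapter documentation:

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='insert_overwrite'
) }}

-- No partition_by is configured, so every run atomically replaces the entire
-- table contents with the result of this query (truncate + insert semantics).
select
    date_trunc('day', event_at) as date_day,
    count(distinct user_id) as users

from {{ ref('events') }}
group by 1
```

Because the whole table is rewritten on every run, this pattern is best reserved for small or fully recomputable tables.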
#### Source Code ```sql @@ -408,7 +408,7 @@ insert overwrite table analytics.spark_incremental select `date_day`, `users` from spark_incremental__dbt_tmp ``` -Specifying `insert_overwrite` as the incremental strategy is optional, since it's the default strategy used when none is specified. +Specifying `insert_overwrite` as the incremental strategy is optional since it's the default strategy used when none is specified. ### The `merge` strategy @@ -420,7 +420,7 @@ Specifying `insert_overwrite` as the incremental strategy is optional, since it' NB: -- For Glue 3: you have to setup a [Glue connectors](https://docs.aws.amazon.com/glue/latest/ug/connectors-chapter.html). +- For Glue 3: you have to set up a [Glue connectors](https://docs.aws.amazon.com/glue/latest/ug/connectors-chapter.html). - For Glue 4: use the `datalake_formats` option in your profile.yml @@ -443,17 +443,17 @@ and that the managed policy `AmazonEC2ContainerRegistryReadOnly` is attached. Be sure that you follow the getting started instructions [here](https://docs.aws.amazon.com/glue/latest/ug/setting-up.html#getting-started-min-privs-connectors). -This [blog post](https://aws.amazon.com/blogs/big-data/part-1-integrate-apache-hudi-delta-lake-apache-iceberg-datasets-at-scale-aws-glue-studio-notebook/) also explain how to setup and works with Glue Connectors +This [blog post](https://aws.amazon.com/blogs/big-data/part-1-integrate-apache-hudi-delta-lake-apache-iceberg-datasets-at-scale-aws-glue-studio-notebook/) also explains how to set up and works with Glue Connectors #### Hudi **Usage notes:** The `merge` with Hudi incremental strategy requires: - To add `file_format: hudi` in your table configuration - To add a datalake_formats in your profile : `datalake_formats: hudi` - - Alternatively, to add a connections in your profile : `connections: name_of_your_hudi_connector` + - Alternatively, to add a connection in your profile: `connections: name_of_your_hudi_connector` - To add Kryo serializer in your Interactive Session Config (in your profile): `conf: spark.serializer=org.apache.spark.serializer.KryoSerializer --conf spark.sql.hive.convertMetastoreParquet=false` -dbt will run an [atomic `merge` statement](https://hudi.apache.org/docs/writing_data#spark-datasource-writer) which looks nearly identical to the default merge behavior on Snowflake and BigQuery. If a `unique_key` is specified (recommended), dbt will update old records with values from new records that match on the key column. If a `unique_key` is not specified, dbt will forgo match criteria and simply insert all new records (similar to `append` strategy). +dbt will run an [atomic `merge` statement](https://hudi.apache.org/docs/writing_data#spark-datasource-writer) which looks nearly identical to the default merge behavior on Snowflake and BigQuery. If a `unique_key` is specified (recommended), dbt will update old records with values from new records that match the key column. If a `unique_key` is not specified, dbt will forgo match criteria and simply insert all new records (similar to `append` strategy). #### Profile config example ```yaml @@ -512,14 +512,14 @@ You can also use Delta Lake to be able to use merge feature on tables. 
**Usage notes:** The `merge` with Delta incremental strategy requires: - To add `file_format: delta` in your table configuration - To add a datalake_formats in your profile : `datalake_formats: delta` - - Alternatively, to add a connections in your profile : `connections: name_of_your_delta_connector` + - Alternatively, to add a connection in your profile: `connections: name_of_your_delta_connector` - To add the following config in your Interactive Session Config (in your profile): `conf: "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog` **Athena:** Athena is not compatible by default with delta tables, but you can configure the adapter to create Athena tables on top of your delta table. To do so, you need to configure the two following options in your profile: - For Delta Lake 2.1.0 supported natively in Glue 4.0: `extra_py_files: "/opt/aws_glue_connectors/selected/datalake/delta-core_2.12-2.1.0.jar"` - For Delta Lake 1.0.0 supported natively in Glue 3.0: `extra_py_files: "/opt/aws_glue_connectors/selected/datalake/delta-core_2.12-1.0.0.jar"` - `delta_athena_prefix: "the_prefix_of_your_choice"` -- If your table is partitioned, then the add of new partition is not automatic, you need to perform an `MSCK REPAIR TABLE your_delta_table` after each new partition adding +- If your table is partitioned, then the addition of new partition is not automatic, you need to perform an `MSCK REPAIR TABLE your_delta_table` after each new partition adding #### Profile config example ```yaml @@ -576,7 +576,7 @@ group by 1 **Usage notes:** The `merge` with Iceberg incremental strategy requires: - To attach the AmazonEC2ContainerRegistryReadOnly Manged policy to your execution role : -- To add the following policy to your execution role to enable commit locking in a dynamodb table (more info [here](https://iceberg.apache.org/docs/latest/aws/#dynamodb-lock-manager)). Note that the DynamoDB table specified in the ressource field of this policy should be the one that is mentionned in your dbt profiles (`--conf spark.sql.catalog.glue_catalog.lock.table=myGlueLockTable`). By default, this table is named `myGlueLockTable` and is created automatically (with On-Demand Pricing) when running a dbt-glue model with Incremental Materialization and Iceberg file format. If you want to name the table differently or to create your own table without letting Glue do it on your behalf, please provide the `iceberg_glue_commit_lock_table` parameter with your table name (eg. `MyDynamoDbTable`) in your dbt profile. +- To add the following policy to your execution role to enable commit locking in a dynamodb table (more info [here](https://iceberg.apache.org/docs/latest/aws/#dynamodb-lock-manager)). Note that the DynamoDB table specified in the resource field of this policy should be the one that is mentioned in your dbt profiles (`--conf spark.sql.catalog.glue_catalog.lock.table=myGlueLockTable`). By default, this table is named `myGlueLockTable` and is created automatically (with On-Demand Pricing) when running a dbt-glue model with Incremental Materialization and Iceberg file format. If you want to name the table differently or to create your own table without letting Glue do it on your behalf, please provide the `iceberg_glue_commit_lock_table` parameter with your table name (eg. `MyDynamoDbTable`) in your dbt profile. 
```yaml iceberg_glue_commit_lock_table: "MyDynamoDbTable" ``` @@ -610,7 +610,7 @@ Make sure you update your conf with `--conf spark.sql.catalog.glue_catalog.lock. ``` - To add `file_format: Iceberg` in your table configuration - To add a datalake_formats in your profile : `datalake_formats: iceberg` - - Alternatively, to add a connections in your profile : `connections: name_of_your_iceberg_connector` ( + - Alternatively, to add connections in your profile: `connections: name_of_your_iceberg_connector` ( - For Athena version 3: - The adapter is compatible with the Iceberg Connector from AWS Marketplace with Glue 3.0 as Fulfillment option and 0.14.0 (Oct 11, 2022) as Software version) - the latest connector for iceberg in AWS marketplace uses Ver 0.14.0 for Glue 3.0, and Ver 1.2.1 for Glue 4.0 where Kryo serialization fails when writing iceberg, use "org.apache.spark.serializer.JavaSerializer" for spark.serializer instead, more info [here](https://github.com/apache/iceberg/pull/546) From 978b76555d949254e42f039f9bd785e5939cd604 Mon Sep 17 00:00:00 2001 From: tromsky <100031072+tromsky@users.noreply.github.com> Date: Sat, 2 Sep 2023 06:04:28 -0400 Subject: [PATCH 048/103] Update pip install instructions in semantic-layer-2-setup.md Adding double-quotes around the module name and extra information in installation command to help guarantee success --- .../how-we-build-our-metrics/semantic-layer-2-setup.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md b/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md index 7861767e25d..34c0e813725 100644 --- a/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md +++ b/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md @@ -23,8 +23,8 @@ We'll use pip to install MetricFlow and our dbt adapter: python -m venv [virtual environment name] source [virtual environment name]/bin/activate # install dbt and MetricFlow -pip install dbt-metricflow[adapter name] -# e.g. dbt-metricflow[snowflake] +pip install "dbt-metricflow[adapter name]" +# e.g. pip install "dbt-metricflow[snowflake]" ``` Lastly, to get to the pre-Semantic Layer starting state, checkout the `start-here` branch. 
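Putting the steps above together, a minimal end-to-end setup sketch could look like the following. The virtual environment name (`venv`) and the Snowflake adapter are illustrative choices rather than requirements:

```bash
# Create and activate an isolated environment for the project
python -m venv venv
source venv/bin/activate

# Quoting the package spec keeps shells like zsh from treating the brackets as a glob
pip install "dbt-metricflow[snowflake]"

# Check out the branch the guide starts from
git checkout start-here
```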
From 963aa86d43dd40191611bdcbdcd3a6754ae39fef Mon Sep 17 00:00:00 2001 From: tromsky <100031072+tromsky@users.noreply.github.com> Date: Sat, 2 Sep 2023 11:05:23 -0400 Subject: [PATCH 049/103] Update SL-MF tutorial for proper expr usage and BigQuery Using an expression in a dimension name is not valid, and the expression is not valid for BigQuery --- .../semantic-layer-3-build-semantic-models.md | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md b/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md index 2c2122572b8..73fa2363aaf 100644 --- a/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md +++ b/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md @@ -148,7 +148,9 @@ from source ```YAML dimensions: - - name: date_trunc('day', ordered_at) + - name: ordered_at + expr: date_trunc('day', ordered_at) + # use date_trunc(ordered_at, DAY) if using BigQuery type: time type_params: time_granularity: day @@ -166,7 +168,9 @@ We'll discuss an alternate situation, dimensional tables that have static numeri ```YAML ... dimensions: - - name: date_trunc('day', ordered_at) + - name: ordered_at + expr: date_trunc('day', ordered_at) + # use date_trunc(ordered_at, DAY) if using BigQuery type: time type_params: time_granularity: day @@ -254,6 +258,8 @@ semantic_models: dimensions: - name: ordered_at + expr: date_trunc('day', ordered_at) + # use date_trunc(ordered_at, DAY) if using BigQuery type: time type_params: time_granularity: day From 48eff7482a16c9210e47f23f65051234afec40b5 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 4 Sep 2023 16:00:28 +0100 Subject: [PATCH 050/103] Update website/docs/reference/commands/rpc.md --- website/docs/reference/commands/rpc.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/website/docs/reference/commands/rpc.md b/website/docs/reference/commands/rpc.md index cf9fc57194f..0f61ddec9ae 100644 --- a/website/docs/reference/commands/rpc.md +++ b/website/docs/reference/commands/rpc.md @@ -16,7 +16,9 @@ description: "Remote Procedure Call (rpc) dbt server compiles and runs queries, **The dbt-rpc plugin is deprecated.** -dbt Labs actively maintained `dbt-rpc` for compatibility with dbt-core versions up to v1.5. Starting with dbt-core v1.6 (released in July 2023), `dbt-rpc` is no longer supported for ongoing compatibility. In the meantime, dbt Labs will be performing critical maintenance only for `dbt-rpc`, until the last compatible version of dbt-core has reached the end of official support (see [version policies](/docs/dbt-versions/core)). At that point, dbt Labs will archive this repository to be read-only. +dbt Labs actively maintained `dbt-rpc` for compatibility with dbt-core versions up to v1.5. Starting with [dbt-core v1.6 (released in July 2023), `dbt-rpc` is no longer supported for ongoing compatibility. + +In the meantime, dbt Labs will be performing critical maintenance only for `dbt-rpc`, until the last compatible version of dbt-core has reached the [end of official support](/docs/dbt-versions/core#latest-releases). At that point, dbt Labs will archive this repository to be read-only. 
::: From 97a480b273a75a7007ca38b9a2b953946b4e9b92 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 4 Sep 2023 16:00:33 +0100 Subject: [PATCH 051/103] Update website/docs/reference/commands/rpc.md --- website/docs/reference/commands/rpc.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/commands/rpc.md b/website/docs/reference/commands/rpc.md index 0f61ddec9ae..7b39b3bef20 100644 --- a/website/docs/reference/commands/rpc.md +++ b/website/docs/reference/commands/rpc.md @@ -12,7 +12,7 @@ description: "Remote Procedure Call (rpc) dbt server compiles and runs queries, -:::caution Deprecation +:::caution The dbt-rpc plugin is deprecated **The dbt-rpc plugin is deprecated.** From 7b06fdbc4ab7b8a776c00eb3186c25ee89f4657f Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 4 Sep 2023 16:05:53 +0100 Subject: [PATCH 052/103] Update website/docs/reference/commands/rpc.md --- website/docs/reference/commands/rpc.md | 1 - 1 file changed, 1 deletion(-) diff --git a/website/docs/reference/commands/rpc.md b/website/docs/reference/commands/rpc.md index 7b39b3bef20..af7d3312e0c 100644 --- a/website/docs/reference/commands/rpc.md +++ b/website/docs/reference/commands/rpc.md @@ -14,7 +14,6 @@ description: "Remote Procedure Call (rpc) dbt server compiles and runs queries, :::caution The dbt-rpc plugin is deprecated -**The dbt-rpc plugin is deprecated.** dbt Labs actively maintained `dbt-rpc` for compatibility with dbt-core versions up to v1.5. Starting with [dbt-core v1.6 (released in July 2023), `dbt-rpc` is no longer supported for ongoing compatibility. From e2937d4a95d1c5f0cc39066e316dbea502afe66f Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 4 Sep 2023 16:06:13 +0100 Subject: [PATCH 053/103] Update website/docs/reference/commands/rpc.md --- website/docs/reference/commands/rpc.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/commands/rpc.md b/website/docs/reference/commands/rpc.md index af7d3312e0c..2b9a96688de 100644 --- a/website/docs/reference/commands/rpc.md +++ b/website/docs/reference/commands/rpc.md @@ -15,7 +15,7 @@ description: "Remote Procedure Call (rpc) dbt server compiles and runs queries, :::caution The dbt-rpc plugin is deprecated -dbt Labs actively maintained `dbt-rpc` for compatibility with dbt-core versions up to v1.5. Starting with [dbt-core v1.6 (released in July 2023), `dbt-rpc` is no longer supported for ongoing compatibility. +dbt Labs actively maintained `dbt-rpc` for compatibility with dbt-core versions up to v1.5. Starting with dbt-core v1.6 (released in July 2023), `dbt-rpc` is no longer supported for ongoing compatibility. In the meantime, dbt Labs will be performing critical maintenance only for `dbt-rpc`, until the last compatible version of dbt-core has reached the [end of official support](/docs/dbt-versions/core#latest-releases). At that point, dbt Labs will archive this repository to be read-only. 
From 593b83a38f5a76edd92212e980d9dbe81e7788c9 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 5 Sep 2023 14:47:21 +0100 Subject: [PATCH 054/103] Update website/docs/terms/monotonically-increasing.md --- website/docs/terms/monotonically-increasing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/terms/monotonically-increasing.md b/website/docs/terms/monotonically-increasing.md index 6d1264237ab..4317d39b2fa 100644 --- a/website/docs/terms/monotonically-increasing.md +++ b/website/docs/terms/monotonically-increasing.md @@ -1,7 +1,7 @@ --- id: monotonically-increasing title: Monotonically increasing -description: A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. +description: A monotonically increasing sequence is a sequence whose values are sorted in ascending order and do not decrease. For example, the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. displayText: monotonically increasing hoverSnippet: A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. --- From 9d4a4b9bc7f4779119efb9045929b35cdb811e6c Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 5 Sep 2023 14:47:53 +0100 Subject: [PATCH 055/103] Update website/docs/terms/monotonically-increasing.md --- website/docs/terms/monotonically-increasing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/terms/monotonically-increasing.md b/website/docs/terms/monotonically-increasing.md index 4317d39b2fa..c7455572850 100644 --- a/website/docs/terms/monotonically-increasing.md +++ b/website/docs/terms/monotonically-increasing.md @@ -3,7 +3,7 @@ id: monotonically-increasing title: Monotonically increasing description: A monotonically increasing sequence is a sequence whose values are sorted in ascending order and do not decrease. For example, the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. displayText: monotonically increasing -hoverSnippet: A monotonically-increasing sequence is a sequence whose values do not decrease, for example the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. +hoverSnippet: A monotonically-increasing sequence is a sequence whose values are sorted in ascending order and do not decrease. For example, the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. --- Monotonicity means unchanging (think monotone); a monotonic sequence is a sequence where the order of the value of the elements does not change. In other words, a monotonically-increasing sequence is a sequence whose values do not decrease. For example the sequences `[1, 6, 7, 11, 131]` or `[2, 5, 5, 5, 6, 10]`. 
From e9f231ada0da484efb59b849643d7d34dc27c61f Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 5 Sep 2023 14:48:01 +0100 Subject: [PATCH 056/103] Update website/docs/terms/monotonically-increasing.md --- website/docs/terms/monotonically-increasing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/terms/monotonically-increasing.md b/website/docs/terms/monotonically-increasing.md index c7455572850..b4e3987995d 100644 --- a/website/docs/terms/monotonically-increasing.md +++ b/website/docs/terms/monotonically-increasing.md @@ -6,6 +6,6 @@ displayText: monotonically increasing hoverSnippet: A monotonically-increasing sequence is a sequence whose values are sorted in ascending order and do not decrease. For example, the sequences 1, 6, 7, 11, 131 or 2, 5, 5, 5, 6, 10. --- -Monotonicity means unchanging (think monotone); a monotonic sequence is a sequence where the order of the value of the elements does not change. In other words, a monotonically-increasing sequence is a sequence whose values do not decrease. For example the sequences `[1, 6, 7, 11, 131]` or `[2, 5, 5, 5, 6, 10]`. +Monotonicity means unchanging (think monotone); a monotonic sequence is a sequence where the order of the value of the elements does not change. In other words, a monotonically-increasing sequence is a sequence whose values are sorted in ascending order and do not decrease. For example the sequences `[1, 6, 7, 11, 131]` or `[2, 5, 5, 5, 6, 10]`.. Monotonically-increasing values often appear in primary keys generated by production systems. In an analytics engineering context, you should avoid generating such values or assuming their existence in your models, because they make it more difficult to create an data model. Instead you should create a which is derived from the unique component(s) of a row. From 0eb4521d3bc44684d5f06be18b7275f3dfcb81d4 Mon Sep 17 00:00:00 2001 From: Emily Rockman Date: Tue, 5 Sep 2023 14:30:37 -0500 Subject: [PATCH 057/103] create workflow to label autogenerated issues from core repos (#4018) ## What are you changing in this pull request and why? Creating a new workflow to label autogenerate issues from the core repo. Workflows need to live on the default branch. ## Checklist - [ ] ~~Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) and [About versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) so my content adheres to these guidelines.~~ not applicable - [ ] Add a checklist item for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch." Needs technical review since this is for label automation and is unrelated to docs. Relates to https://github.com/dbt-labs/dbt-core/issues/8439 --- .github/workflows/autogenerated_labeler.yml | 40 +++++++++++++++++++++ 1 file changed, 40 insertions(+) create mode 100644 .github/workflows/autogenerated_labeler.yml diff --git a/.github/workflows/autogenerated_labeler.yml b/.github/workflows/autogenerated_labeler.yml new file mode 100644 index 00000000000..4aa41b7f0d4 --- /dev/null +++ b/.github/workflows/autogenerated_labeler.yml @@ -0,0 +1,40 @@ +# **what?** +# Labels issues autogenerated in dbt-core + +# **why?** +# To organize autogenerated issues from dbt-core to make it easier to find and track them. 
+ +# **when?** +# When an issue is opened by the FishtownBuildBot + +name: Add Labels to Autogenerated Issues + +on: + issues: + types: [opened] + +jobs: + add_customized_labels: + if: github.event.issue.user.login == 'FishtownBuildBot' + permissions: + issues: write + + runs-on: ubuntu-latest + steps: + - name: "Determine appropriate labels by repo in title" + id: repo + env: + ISSUE_TITLE: ${{ github.event.issue.title }} + run: | + if [[ "$ISSUE_TITLE" == *"dbt-core"* ]]; then + echo "labels='content,improvement,dbt Core'" >> $GITHUB_OUTPUT + else + echo "labels='content,improvement,adapters'" >> $GITHUB_OUTPUT + fi + + - name: Add Labels to autogenerated Issues + id: add-labels + run: | + gh issue edit ${{ github.event.issue.number }} --add-label "${{ steps.repo.outputs.labels }}" + env: + GITHUB_TOKEN: ${{ secrets.FISHTOWN_BOT_PAT }} From 7abd5d3ed17494fbfa6f5c0091612cd9fcd26d87 Mon Sep 17 00:00:00 2001 From: Emily Rockman Date: Tue, 5 Sep 2023 14:52:46 -0500 Subject: [PATCH 058/103] swap out token and token name (#4020) ## What are you changing in this pull request and why? Fixing the token name so the workflow stops failing. ## Checklist - [ ] ~~Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) and [About versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) so my content adheres to these guidelines.~~ n/a - [ ] Add a checklist item for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch." Needs technical review --- .github/workflows/autogenerated_labeler.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/autogenerated_labeler.yml b/.github/workflows/autogenerated_labeler.yml index 4aa41b7f0d4..d72601c069a 100644 --- a/.github/workflows/autogenerated_labeler.yml +++ b/.github/workflows/autogenerated_labeler.yml @@ -37,4 +37,4 @@ jobs: run: | gh issue edit ${{ github.event.issue.number }} --add-label "${{ steps.repo.outputs.labels }}" env: - GITHUB_TOKEN: ${{ secrets.FISHTOWN_BOT_PAT }} + GH_TOKEN: ${{ secrets.DOCS_SECRET }} From 12db4fbf8eb4ce0db954ccf9ed53b4876f96678d Mon Sep 17 00:00:00 2001 From: David Thorn Date: Tue, 5 Sep 2023 15:18:33 -0700 Subject: [PATCH 059/103] Update deploy-jobs.md for self-deferral The CI Phase 2 beta will now include the ability for Deploy jobs to self-defer. This will be useful for jobs running compile, source_freshness:fresher, and other commands that need to defer to the last successful state of that particular job. Feel free to suggest other language as well, this is not perfect. --- website/docs/docs/deploy/deploy-jobs.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/deploy/deploy-jobs.md b/website/docs/docs/deploy/deploy-jobs.md index 3d754beb609..5287bfd3109 100644 --- a/website/docs/docs/deploy/deploy-jobs.md +++ b/website/docs/docs/deploy/deploy-jobs.md @@ -90,7 +90,7 @@ If you're interested in joining our beta, please fill out our Google Form to [si - **Environment Variables** — Define [environment variables](/docs/build/environment-variables) to customize the behavior of your project when the deploy job runs. - **Target Name** — Define theΒ [target name](/docs/build/custom-target-names) to customize the behavior of your project when the deploy job runs. Environment variables and target names are often used interchangeably. 
- **Run Timeout** — Cancel the deploy job if the run time exceeds the timeout value. - - **Compare changes against an environment (Deferral)** option β€” By default, it’s set to **No deferral**. + - **Compare changes against ** option β€” By default, it’s set to **No deferral**. For Deploy jobs, you can select either no deferral, deferral to an environment, or self defer (to the same job). :::info Older versions of dbt Cloud only allow you to defer to a specific job instead of an environment. Deferral to a job compares state against the project code that was run in the deferred job's last successful run. While deferral to an environment is more efficient as dbt Cloud will compare against the project representation (which is stored in the `manifest.json`) of the last successful deploy job run that executed in the deferred environment. By considering _all_ deploy jobs that run in the deferred environment, dbt Cloud will get a more accurate, latest project representation state. @@ -148,4 +148,4 @@ Refer to the following example snippets: - [Artifacts](/docs/deploy/artifacts) - [Continuous integration (CI) jobs](/docs/deploy/ci-jobs) -- [Webhooks](/docs/deploy/webhooks) \ No newline at end of file +- [Webhooks](/docs/deploy/webhooks) From 5608123d7c015769551f7187cdebeea0b74c7ece Mon Sep 17 00:00:00 2001 From: Emily Rockman Date: Tue, 5 Sep 2023 19:23:05 -0500 Subject: [PATCH 060/103] add repo and remove extra quotation marks (#4024) ## What are you changing in this pull request and why? Fixing a run error where the repo was missing because we never check ti out. Now sending in the repo as an arg to the update call. ## Checklist - [ ] ~~Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) and [About versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) so my content adheres to these guidelines.~~ n/a - [ ] Add a checklist item for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch." 
Needs technical review --- .github/workflows/autogenerated_labeler.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/autogenerated_labeler.yml b/.github/workflows/autogenerated_labeler.yml index d72601c069a..e6aab0492b8 100644 --- a/.github/workflows/autogenerated_labeler.yml +++ b/.github/workflows/autogenerated_labeler.yml @@ -32,9 +32,9 @@ jobs: echo "labels='content,improvement,adapters'" >> $GITHUB_OUTPUT fi - - name: Add Labels to autogenerated Issues + - name: "Add Labels to autogenerated Issues" id: add-labels run: | - gh issue edit ${{ github.event.issue.number }} --add-label "${{ steps.repo.outputs.labels }}" + gh issue edit ${{ github.event.issue.number }} --repo ${{ github.repository }} --add-label ${{ steps.repo.outputs.labels }} env: GH_TOKEN: ${{ secrets.DOCS_SECRET }} From c13183772a3fc950c7d5e2aaf4a77a7f54a19291 Mon Sep 17 00:00:00 2001 From: Henri-Maxime Ducoulombier Date: Wed, 6 Sep 2023 15:06:29 +0200 Subject: [PATCH 061/103] Update bigquery-setup.md Fix BigQuery name (no space) --- website/docs/docs/core/connect-data-platform/bigquery-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/bigquery-setup.md b/website/docs/docs/core/connect-data-platform/bigquery-setup.md index 80150917212..ad056ab46b1 100644 --- a/website/docs/docs/core/connect-data-platform/bigquery-setup.md +++ b/website/docs/docs/core/connect-data-platform/bigquery-setup.md @@ -11,7 +11,7 @@ meta: min_supported_version: 'n/a' slack_channel_name: '#db-bigquery' slack_channel_link: 'https://getdbt.slack.com/archives/C99SNSRTK' - platform_name: 'Big Query' + platform_name: 'BigQuery' config_page: '/reference/resource-configs/bigquery-configs' --- From df2165cd937dbf0b8b23317e45c16115942cb3c7 Mon Sep 17 00:00:00 2001 From: Jeremy Cohen Date: Wed, 6 Sep 2023 15:10:37 +0200 Subject: [PATCH 062/103] Hotfix: replace --skip-populate-cache with --no-populate-cache --- website/docs/reference/global-configs/cache.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/reference/global-configs/cache.md b/website/docs/reference/global-configs/cache.md index db4eabd14b7..6157e1a3bfb 100644 --- a/website/docs/reference/global-configs/cache.md +++ b/website/docs/reference/global-configs/cache.md @@ -17,7 +17,7 @@ There are two ways to optionally modify this behavior: For example, to quickly compile a model that requires no database metadata or introspective queries: ```text -dbt --skip-populate-cache compile --select my_model_name +dbt --no-populate-cache compile --select my_model_name ``` @@ -63,4 +63,4 @@ config: - \ No newline at end of file + From 95c476614a922b48a4c5bdb8d315ba6dfc1918e4 Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Wed, 6 Sep 2023 11:43:13 -0400 Subject: [PATCH 063/103] athena is trusted --- website/snippets/_adapters-trusted.md | 11 +++-------- website/static/img/icons/athena.svg | 1 + website/static/img/icons/white/athena.svg | 1 + 3 files changed, 5 insertions(+), 8 deletions(-) create mode 100644 website/static/img/icons/athena.svg create mode 100644 website/static/img/icons/white/athena.svg diff --git a/website/snippets/_adapters-trusted.md b/website/snippets/_adapters-trusted.md index 7d961e62ee6..10af0218e22 100644 --- a/website/snippets/_adapters-trusted.md +++ b/website/snippets/_adapters-trusted.md @@ -1,13 +1,8 @@
- - + title="Athena*" + body="Install using the CLI

" + icon="athena"/>
diff --git a/website/static/img/icons/athena.svg b/website/static/img/icons/athena.svg new file mode 100644 index 00000000000..c2c6a81dd64 --- /dev/null +++ b/website/static/img/icons/athena.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/website/static/img/icons/white/athena.svg b/website/static/img/icons/white/athena.svg new file mode 100644 index 00000000000..c2c6a81dd64 --- /dev/null +++ b/website/static/img/icons/white/athena.svg @@ -0,0 +1 @@ + \ No newline at end of file From a5d1732ae694580d4576d78b28d29a95bec34b8e Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Wed, 6 Sep 2023 14:01:23 -0400 Subject: [PATCH 064/103] cleaner card layout --- website/docs/docs/trusted-adapters.md | 1 + 1 file changed, 1 insertion(+) diff --git a/website/docs/docs/trusted-adapters.md b/website/docs/docs/trusted-adapters.md index f3ff07467c3..e19bb40785f 100644 --- a/website/docs/docs/trusted-adapters.md +++ b/website/docs/docs/trusted-adapters.md @@ -1,6 +1,7 @@ --- title: "Trusted adapters" id: "trusted-adapters" +hide_table_of_contents: true --- Trusted adapters are adapters not maintained by dbt Labs, that we feel comfortable recommending to users for use in production. From eadc0cb1e73cf0654befc134dea231799bc3c94c Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Wed, 6 Sep 2023 14:33:45 -0400 Subject: [PATCH 065/103] clarify --- .../zzz_add-adapter-to-trusted-list.yml | 31 ++++++++++++++----- 1 file changed, 24 insertions(+), 7 deletions(-) diff --git a/.github/ISSUE_TEMPLATE/zzz_add-adapter-to-trusted-list.yml b/.github/ISSUE_TEMPLATE/zzz_add-adapter-to-trusted-list.yml index 30e47c86567..e19accf6ebb 100644 --- a/.github/ISSUE_TEMPLATE/zzz_add-adapter-to-trusted-list.yml +++ b/.github/ISSUE_TEMPLATE/zzz_add-adapter-to-trusted-list.yml @@ -1,12 +1,14 @@ name: Add adapter to Trusted list -description: > - For adapter maintainers who wish to have theirs added to the list of [Trusted adapters](https://docs.getdbt.com/docs/trusted-adapters) +description: For adapter maintainers who wish to have theirs added to the list of Trusted adapters. +title: "Trust dbt-myadapter" labels: ["adapter maintainers"] +assignees: + - dataders body: - type: markdown attributes: value: | - We're excited that you'd like to support your adapter formally as "Trusted"! This template will ensure that you are aware of the process and the guidelines. Additionally, that you can vouch that your adapter currently meets the standards of a Trusted adapter + We're excited that you'd like to support your adapter formally as "Trusted"! This template will ensure that you are aware of the process and the guidelines. Additionally, that you can vouch that your adapter currently meets the standards of a Trusted adapter. For more information, see [Trusted adapters](https://docs.getdbt.com/docs/trusted-adapters) - type: input id: adapter-repo @@ -25,13 +27,15 @@ body: validations: required: true - - type: checkboxes + - type: dropdown id: author_type attributes: label: Which of these best describes you? 
options: - - label: I am a dbt Community member - - label: I work for the vendor on top of which the dbt adapter functions + - I am a dbt Community member + - I work for the vendor on top of which the dbt adapter functions + validations: + required: true - type: checkboxes id: read-program-guide @@ -39,7 +43,20 @@ body: label: Please agree to the each of the following options: - label: I am a maintainer of the adapter being submited for Trusted status + required: true - label: I have read both the [Trusted adapters](https://docs.getdbt.com/docs/trusted-adapters) and [Building a Trusted Adapter](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter) pages. + required: true - label: I believe that the adapter currently meets the expectations given above - - label: I will ensure this adapter stays in compliance with the guidelines + required: true + - label: I will ensure this adapter stays in compliance with the guidelines + required: true - label: I understand that dbt Labs reserves the right to remove an adapter from the trusted adapter list at any time, should any of the below guidelines not be met + required: true + + - type: textarea + id: icon + attributes: + label: What icon should be used? + description: | + Please share an svg image that you'd like to be displayed in for your adapter. Normally, this is the logo for the data platform on top of which your adapter works. If there's a dark mode version, please also share that. + Pasting the image from your clipboard will upload the file to GitHub and create markdown formatting for it to be rendered inline From 40020c01287f6b577a05864fc5197b4a635cccf0 Mon Sep 17 00:00:00 2001 From: Anders Swanson Date: Wed, 6 Sep 2023 16:51:27 -0400 Subject: [PATCH 066/103] drop cloud-specifc requirements --- .../adapter-development/8-building-a-trusted-adapter.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/website/docs/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter.md b/website/docs/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter.md index b8cef0ea34a..9783ec66460 100644 --- a/website/docs/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter.md +++ b/website/docs/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter.md @@ -44,8 +44,6 @@ Trusted adapters will not do any of the following: - Output to logs or file either access credentials information to or data from the underlying data platform itself. - Make API calls other than those expressly required for using dbt features (adapters may not add additional logging) - Obfuscate code and/or functionality so as to avoid detection -- Use the Python runtime of dbt to execute arbitrary Python code -- Draw a dependency on dbt’s Python API beyond what is required for core data transformation functionality as described in the Essential and Extended feature tiers Additionally, to avoid supply-chain attacks: From 7629843b8d060ba4b9e5077a7076607dd0b2ade6 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 6 Sep 2023 22:32:10 +0100 Subject: [PATCH 067/103] breakdown cli quickstart steps (#4012) the core quickstart currently embeds multiple h3 steps into one h2. this makes the pages quite long and misrepresents the steps as well. After discussion with @runleonarun and feedback from @greg-mckeon , this pr breaks down the h3 steps into h2 to give users more bite-sized chunks of content. this pr also adds redirects from steps to anchored links. 
[docs project](https://www.notion.so/dbtlabs/Breakdown-Core-quickstart-into-steps-0f1d500635fa4d288371dc4a033cc4e4) --------- Co-authored-by: Leona B. Campbell <3880403+runleonarun@users.noreply.github.com> Co-authored-by: John Rock <46692803+john-rock@users.noreply.github.com> --- website/docs/quickstarts/manual-install-qs.md | 433 +++++++++--------- website/src/components/quickstartTOC/index.js | 20 +- .../dbt-cloud/deployment/run-overview.jpg | Bin 0 -> 101608 bytes 3 files changed, 230 insertions(+), 223 deletions(-) create mode 100644 website/static/img/docs/dbt-cloud/deployment/run-overview.jpg diff --git a/website/docs/quickstarts/manual-install-qs.md b/website/docs/quickstarts/manual-install-qs.md index ea3c6c7ec84..05336178ff6 100644 --- a/website/docs/quickstarts/manual-install-qs.md +++ b/website/docs/quickstarts/manual-install-qs.md @@ -18,11 +18,11 @@ When you use dbt Core to work with dbt, you will be editing files locally using * Complete [Setting up (in BigQuery)](/quickstarts/bigquery?step=2) and [Loading data (BigQuery)](/quickstarts/bigquery?step=3). * [Create a GitHub account](https://github.com/join) if you don't already have one. -## Create a starter project +### Create a starter project After setting up BigQuery to work with dbt, you are ready to create a starter project with example models, before building your own models. -### Create a repository +## Create a repository The following steps use [GitHub](https://github.com/) as the Git provider for this guide, but you can use any Git provider. You should have already [created a GitHub account](https://github.com/join). @@ -32,7 +32,7 @@ The following steps use [GitHub](https://github.com/) as the Git provider for th 4. Click **Create repository**. 5. Save the commands from "…or create a new repository on the command line" to use later in [Commit your changes](#commit-your-changes). -### Create a project +## Create a project Learn how to use a series of commands using the command line of the Terminal to create your project. dbt Core includes an `init` command that helps scaffold a dbt project. @@ -40,56 +40,56 @@ To create your dbt project: 1. Make sure you have dbt Core installed and check the version using the `dbt --version` command: - ```terminal - dbt --version - ``` +```shell +dbt --version +``` 2. Initiate the `jaffle_shop` project using the `init` command: - ```terminal - dbt init jaffle_shop - ``` +```shell +dbt init jaffle_shop +``` 3. Navigate into your project's directory: - ```terminal - cd jaffle_shop - ``` +```shell +cd jaffle_shop +``` 4. Use `pwd` to confirm that you are in the right spot: - ```terminal - $ pwd - > Users/BBaggins/dbt-tutorial/jaffle_shop - ``` +```shell +$ pwd +> Users/BBaggins/dbt-tutorial/jaffle_shop +``` 5. Use a code editor like Atom or VSCode to open the project directory you created in the previous steps, which we named jaffle_shop. The content includes folders and `.sql` and `.yml` files generated by the `init` command. -
- -
+
+ +
6. Update the following values in the `dbt_project.yml` file: - + - ```yaml - name: jaffle_shop # Change from the default, `my_new_project` +```yaml +name: jaffle_shop # Change from the default, `my_new_project` - ... +... - profile: jaffle_shop # Change from the default profile name, `default` +profile: jaffle_shop # Change from the default profile name, `default` - ... +... - models: - jaffle_shop: # Change from `my_new_project` to match the previous value for `name:` - ... - ``` +models: + jaffle_shop: # Change from `my_new_project` to match the previous value for `name:` + ... +``` - + -### Connect to BigQuery +## Connect to BigQuery When developing locally, dbt connects to your using a [profile](/docs/core/connect-data-platform/connection-profiles), which is a YAML file with all the connection details to your warehouse. @@ -97,38 +97,38 @@ When developing locally, dbt connects to your using 2. Move your BigQuery keyfile into this directory. 3. Copy the following and paste into the new profiles.yml file. Make sure you update the values where noted. - - - ```yaml - jaffle_shop: # this needs to match the profile in your dbt_project.yml file - target: dev - outputs: - dev: - type: bigquery - method: service-account - keyfile: /Users/BBaggins/.dbt/dbt-tutorial-project-331118.json # replace this with the full path to your keyfile - project: grand-highway-265418 # Replace this with your project id - dataset: dbt_bbagins # Replace this with dbt_your_name, e.g. dbt_bilbo - threads: 1 - timeout_seconds: 300 - location: US - priority: interactive - ``` - - + + +```yaml +jaffle_shop: # this needs to match the profile in your dbt_project.yml file + target: dev + outputs: + dev: + type: bigquery + method: service-account + keyfile: /Users/BBaggins/.dbt/dbt-tutorial-project-331118.json # replace this with the full path to your keyfile + project: grand-highway-265418 # Replace this with your project id + dataset: dbt_bbagins # Replace this with dbt_your_name, e.g. dbt_bilbo + threads: 1 + timeout_seconds: 300 + location: US + priority: interactive +``` + + 4. Run the `debug` command from your project to confirm that you can successfully connect: - ```terminal - $ dbt debug - > Connection test: OK connection ok - ``` +```shell +$ dbt debug +> Connection test: OK connection ok +``` -
- -
+
+ +
-#### FAQs +### FAQs @@ -136,69 +136,72 @@ When developing locally, dbt connects to your using -### Perform your first dbt run +## Perform your first dbt run Our sample project has some example models in it. We're going to check that we can run them to confirm everything is in order. 1. Enter the `run` command to build example models: - ```terminal - dbt run - ``` +```shell +dbt run +``` You should have an output that looks like this: +
-### Commit your changes +## Commit your changes Commit your changes so that the repository contains the latest code. 1. Link the GitHub repository you created to your dbt project by running the following commands in Terminal. Make sure you use the correct git URL for your repository, which you should have saved from step 5 in [Create a repository](#create-a-repository). - ```terminal - git init - git branch -M main - git add . - git commit -m "Create a dbt project" - git remote add origin https://github.com/USERNAME/dbt-tutorial.git - git push -u origin main - ``` +```shell +git init +git branch -M main +git add . +git commit -m "Create a dbt project" +git remote add origin https://github.com/USERNAME/dbt-tutorial.git +git push -u origin main +``` 2. Return to your GitHub repository to verify your new files have been added. -## Build your first models +### Build your first models -Now that you set up your sample project, you can get to the fun part β€” [building models](/docs/build/sql-models)! You will take a sample query and turn it into a model in your dbt project. +Now that you set up your sample project, you can get to the fun part β€” [building models](/docs/build/sql-models)! +In the next steps, you will take a sample query and turn it into a model in your dbt project. -### Checkout a new git branch +## Checkout a new git branch Check out a new git branch to work on new code: 1. Create a new branch by using the `checkout` command and passing the `-b` flag: - ```terminal - $ git checkout -b add-customers-model - > Switched to a new branch `add-customer-model` - ``` +```shell +$ git checkout -b add-customers-model +> Switched to a new branch `add-customer-model` +``` + +## Build your first model -### Build your first model 1. Open your project in your favorite code editor. 2. Create a new SQL file in the `models` directory, named `models/customers.sql`. 3. Paste the following query into the `models/customers.sql` file. - + 4. From the command line, enter `dbt run`. -
- -
+
+ +
When you return to the BigQuery console, you can `select` from this model. -#### FAQs +### FAQs @@ -206,210 +209,210 @@ When you return to the BigQuery console, you can `select` from this model. -### Change the way your model is materialized +## Change the way your model is materialized -### Delete the example models +## Delete the example models -### Build models on top of other models +## Build models on top of other models 1. Create a new SQL file, `models/stg_customers.sql`, with the SQL from the `customers` CTE in our original query. 2. Create a second new SQL file, `models/stg_orders.sql`, with the SQL from the `orders` CTE in our original query. - + -
+
- + - ```sql - select - id as customer_id, - first_name, - last_name +```sql +select + id as customer_id, + first_name, + last_name - from `dbt-tutorial`.jaffle_shop.customers - ``` +from `dbt-tutorial`.jaffle_shop.customers +``` - + - + - ```sql - select - id as order_id, - user_id as customer_id, - order_date, - status +```sql +select + id as order_id, + user_id as customer_id, + order_date, + status - from `dbt-tutorial`.jaffle_shop.orders - ``` +from `dbt-tutorial`.jaffle_shop.orders +``` - + -
+
-
+
- + - ```sql - select - id as customer_id, - first_name, - last_name +```sql +select + id as customer_id, + first_name, + last_name - from jaffle_shop_customers - ``` +from jaffle_shop_customers +``` - + - + - ```sql - select - id as order_id, - user_id as customer_id, - order_date, - status +```sql +select + id as order_id, + user_id as customer_id, + order_date, + status - from jaffle_shop_orders - ``` +from jaffle_shop_orders +``` - + -
+
-
+
- + - ```sql - select - id as customer_id, - first_name, - last_name +```sql +select + id as customer_id, + first_name, + last_name - from jaffle_shop.customers - ``` +from jaffle_shop.customers +``` - + - + - ```sql - select - id as order_id, - user_id as customer_id, - order_date, - status +```sql +select + id as order_id, + user_id as customer_id, + order_date, + status - from jaffle_shop.orders - ``` +from jaffle_shop.orders +``` - + -
+
-
+
- + - ```sql - select - id as customer_id, - first_name, - last_name +```sql +select + id as customer_id, + first_name, + last_name - from raw.jaffle_shop.customers - ``` +from raw.jaffle_shop.customers +``` - + - + - ```sql - select - id as order_id, - user_id as customer_id, - order_date, - status +```sql +select + id as order_id, + user_id as customer_id, + order_date, + status - from raw.jaffle_shop.orders - ``` +from raw.jaffle_shop.orders +``` - + -
+
-
+
3. Edit the SQL in your `models/customers.sql` file as follows: - + + +```sql +with customers as ( - ```sql - with customers as ( + select * from {{ ref('stg_customers') }} - select * from {{ ref('stg_customers') }} +), - ), +orders as ( - orders as ( + select * from {{ ref('stg_orders') }} - select * from {{ ref('stg_orders') }} +), - ), +customer_orders as ( - customer_orders as ( + select + customer_id, - select - customer_id, + min(order_date) as first_order_date, + max(order_date) as most_recent_order_date, + count(order_id) as number_of_orders - min(order_date) as first_order_date, - max(order_date) as most_recent_order_date, - count(order_id) as number_of_orders + from orders - from orders + group by 1 - group by 1 +), - ), +final as ( - final as ( + select + customers.customer_id, + customers.first_name, + customers.last_name, + customer_orders.first_order_date, + customer_orders.most_recent_order_date, + coalesce(customer_orders.number_of_orders, 0) as number_of_orders - select - customers.customer_id, - customers.first_name, - customers.last_name, - customer_orders.first_order_date, - customer_orders.most_recent_order_date, - coalesce(customer_orders.number_of_orders, 0) as number_of_orders + from customers - from customers + left join customer_orders using (customer_id) - left join customer_orders using (customer_id) +) - ) +select * from final - select * from final - - ``` +``` - + 4. Execute `dbt run`. - This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies. +This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies. -#### FAQs {#faq-2} +### FAQs {#faq-2} @@ -424,13 +427,11 @@ You can also explore: * The `target` directory to see all of the compiled SQL. The `run` directory shows the create or replace table statements that are running, which are the select statements wrapped in the correct DDL. * The `logs` file to see how dbt Core logs all of the action happening within your project. It shows the select statements that are running and the python logging happening when dbt runs. -## Test and document your project - -### Add tests to your models +## Add tests to your models -### Document your models +## Document your models @@ -446,7 +447,7 @@ You can also explore: -### Commit updated changes +## Commit updated changes You need to commit the changes you made to the project so that the repository has your latest code. @@ -457,4 +458,10 @@ You need to commit the changes you made to the project so that the repository ha ## Schedule a job -We recommend using dbt Cloud to schedule a job. For more information about using dbt Core to schedule a job, see [dbt airflow](/blog/dbt-airflow-spiritual-alignment) blog post or [deployments](/docs/deploy/deployments). +We recommend using dbt Cloud as the easiest and most reliable way to [deploy jobs](/docs/deploy/deployments) and automate your dbt project in production. + +For more info on how to get started, refer to [create and schedule jobs](/docs/deploy/deploy-jobs#create-and-schedule-jobs). 
+
+
+
+For more information about using dbt Core to schedule a job, refer to the [dbt airflow](/blog/dbt-airflow-spiritual-alignment) blog post.
diff --git a/website/src/components/quickstartTOC/index.js b/website/src/components/quickstartTOC/index.js
index 49209273964..8c9b8fba910 100644
--- a/website/src/components/quickstartTOC/index.js
+++ b/website/src/components/quickstartTOC/index.js
@@ -26,16 +26,6 @@ function QuickstartTOC() {
     const steps = quickstartContainer.querySelectorAll("h2");
     const snippetContainer = document.querySelectorAll(".snippet");
 
-    // Add snippet container to its parent step
-    snippetContainer.forEach((snippet) => {
-      const parent = snippet?.parentNode;
-      while (snippet?.firstChild && parent.className) {
-        if (parent) {
-          parent.insertBefore(snippet.firstChild, snippet);
-        }
-      }
-    });
-
     // Create an array of objects with the id and title of each step
     const data = Array.from(steps).map((step, index) => ({
       id: step.id,
@@ -49,6 +39,16 @@
 
     // Wrap all h2 (steps), along with all of their direct siblings, in a div until the next h2
     if (mounted) {
+      // Add snippet container to its parent step
+      snippetContainer.forEach((snippet) => {
+        const parent = snippet?.parentNode;
+        while (snippet?.firstChild && parent.className) {
+          if (parent) {
+            parent.insertBefore(snippet.firstChild, snippet);
+          }
+        }
+      });
+
       steps.forEach((step, index) => {
         const wrapper = document.createElement("div");
         wrapper.classList.add(style.stepWrapper);
diff --git a/website/static/img/docs/dbt-cloud/deployment/run-overview.jpg b/website/static/img/docs/dbt-cloud/deployment/run-overview.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8ab14b8ce2b5dca1216c44b06049f6972f4534ca
GIT binary patch
literal 101608
From 5000de55bf3801125917cfaa66b3d9851a1ea0ad Mon Sep 17 00:00:00 2001 From: Andrew Tsao Date: Wed, 6 Sep 2023 17:37:41 -0400 Subject: [PATCH 068/103] Update extended attributes Add an additional bullet to clarify that extended attributes only support top-level keys / fields. This will prevent confusion in case users try to pass in partial or individual sub-key values (instead of passing in the entire top-level key as a nested JSON block in the YAML). The latter will work while the former will cause an error when you actually try to run dbt. --- website/snippets/_cloud-environments-info.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index 9311dc25139..aed2a6316bb 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -65,6 +65,8 @@ If you're developing in the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in - If the attribute doesn't exist, it will add the attribute or value pair to the profile. +- Only **top-level** keys are accepted in extended attributes. In other words, if you wish to update a particular *sub-key value*, you will need to update the entire top-level key as a JSON block in your resulting YAML. For example, assuming you wish to pass in a custom override for a specific individual [service account JSON field](/docs/core/connect-data-platform/bigquery-setup#service-account-json) for your BigQuery connection (e.g. `project_id` or `client_email`), you will need to pass in an override for the entire top-level `keyfile_json` key / attribute instead via extended attributes (with the sub-fields passed in as a nested JSON block). + The following code is an example of the types of attributes you can add in the **Extended Attributes** text box: ```yaml From a5da59087fa536f244e24c99e436296704d73c0e Mon Sep 17 00:00:00 2001 From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com> Date: Wed, 6 Sep 2023 15:04:48 -0700 Subject: [PATCH 069/103] Simplify the templates (#4031) ## What are you changing in this pull request and why?
* Reorder the issue templates to make more sense. * Removing unused templates ## Checklist - [ ] Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) and [About versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) so my content adheres to these guidelines. - [ ] Add a checklist item for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch." Adding new pages (delete if not applicable): - [ ] Add page to `website/sidebars.js` - [ ] Provide a unique filename for the new page Removing or renaming existing pages (delete if not applicable): - [ ] Remove page from `website/sidebars.js` - [ ] Add an entry `website/static/_redirects` - [ ] [Ran link testing](https://github.com/dbt-labs/docs.getdbt.com#running-the-cypress-tests-locally) to update the links that point to the deleted page --------- Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- .../{improve-docs.yml => a-improve-docs.yml} | 2 +- .github/ISSUE_TEMPLATE/config.yml | 3 -- .github/ISSUE_TEMPLATE/improve-the-site.yml | 8 ++--- .github/ISSUE_TEMPLATE/new-dbt-feature.yml | 33 ------------------- 4 files changed, 5 insertions(+), 41 deletions(-) rename .github/ISSUE_TEMPLATE/{improve-docs.yml => a-improve-docs.yml} (98%) delete mode 100644 .github/ISSUE_TEMPLATE/new-dbt-feature.yml diff --git a/.github/ISSUE_TEMPLATE/improve-docs.yml b/.github/ISSUE_TEMPLATE/a-improve-docs.yml similarity index 98% rename from .github/ISSUE_TEMPLATE/improve-docs.yml rename to .github/ISSUE_TEMPLATE/a-improve-docs.yml index 57dc64cc312..70b173e49a4 100644 --- a/.github/ISSUE_TEMPLATE/improve-docs.yml +++ b/.github/ISSUE_TEMPLATE/a-improve-docs.yml @@ -39,4 +39,4 @@ body: label: Additional information description: Add any other context or screenshots about the feature request here. validations: - required: false \ No newline at end of file + required: false diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml index 9349000f66b..f3a3521bdec 100644 --- a/.github/ISSUE_TEMPLATE/config.yml +++ b/.github/ISSUE_TEMPLATE/config.yml @@ -1,8 +1,5 @@ blank_issues_enabled: true contact_links: - - name: Want to see new content? Open a discussion! - url: https://github.com/dbt-labs/docs.getdbt.com/discussions/new - about: You can open a discussion to propose new content for the dbt product documentation. - name: Have questions about dbt? Join the Community! url: https://www.getdbt.com/community/join-the-community about: You can join the dbt Labs Community to ask and answer questions. diff --git a/.github/ISSUE_TEMPLATE/improve-the-site.yml b/.github/ISSUE_TEMPLATE/improve-the-site.yml index e0556d7374f..dd585324f89 100644 --- a/.github/ISSUE_TEMPLATE/improve-the-site.yml +++ b/.github/ISSUE_TEMPLATE/improve-the-site.yml @@ -1,6 +1,6 @@ -name: Improve the docs.getdbt.com site -description: Make a suggestion or report a problem about the technical implementation of docs.getdbt.com. -labels: ["engineering"] +name: Report a docs.getdbt.com site issue +description: Report a problem about the technical implementation of docs.getdbt.com. +labels: ["engineering","bug"] body: - type: markdown attributes: @@ -39,4 +39,4 @@ body: label: Additional information description: Any additional information, configuration, or data that might be necessary to reproduce the issue. 
validations: - required: false \ No newline at end of file + required: false diff --git a/.github/ISSUE_TEMPLATE/new-dbt-feature.yml b/.github/ISSUE_TEMPLATE/new-dbt-feature.yml deleted file mode 100644 index fa46a189fc4..00000000000 --- a/.github/ISSUE_TEMPLATE/new-dbt-feature.yml +++ /dev/null @@ -1,33 +0,0 @@ -name: Start docs project for a new feature -description: For dbt PMs to add docs for their new or updated dbt product features. -labels: ["content","upcoming release"] -body: - - type: markdown - attributes: - value: | - * Before you file an issue read the [Contributing guide](https://github.com/dbt-labs/docs.getdbt.com#contributing). - * Check to make sure someone hasn't already opened a similar [issue](https://github.com/dbt-labs/docs.getdbt.com/issues). - - - type: checkboxes - id: contributions - attributes: - label: Contributions - description: This applies to new, unreleased content. - options: - - label: I am a PM or subject matter expert at dbt who is responsible for this feature. - - - type: textarea - attributes: - label: Where does this content belong? - description: | - - Give as much detail as you can to help us understand where you expect the content to live. - validations: - required: true - - - type: textarea - attributes: - label: Link to source material - description: | - Use the [source material template](https://docs.google.com/document/d/1lLWGMXJFjkY4p7r8ZKhBX73dOLmIjgXZBYq39LqmAJs/edit) to provide source material for this feature. - validations: - required: true \ No newline at end of file From 5b1b5baf7eba7e3e367626255a253c7bb3bde166 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 11:48:31 +0100 Subject: [PATCH 070/103] Update audit-log.md fix image sizes --- website/docs/docs/cloud/manage-access/audit-log.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/cloud/manage-access/audit-log.md b/website/docs/docs/cloud/manage-access/audit-log.md index 818ec553e7b..434a67dc1fb 100644 --- a/website/docs/docs/cloud/manage-access/audit-log.md +++ b/website/docs/docs/cloud/manage-access/audit-log.md @@ -20,7 +20,7 @@ To access audit log, click the gear icon in the top right, then click **Audit Lo
- +
@@ -163,7 +163,7 @@ You can search the audit log to find a specific event or actor, which is limited
- +
@@ -174,6 +174,6 @@ You can use the audit log to export all historical audit results for security, c - For events within 90 days — dbt Cloud will automatically display the 90 days selectable date range. Select **Export Selection** to download a CSV file of all the events that occurred in your organization within 90 days. - For events beyond 90 days — Select **Export All**. The Account Admin will receive an email link to download a CSV file of all the events that occurred in your organization. - + From 3bd04934c86d271bdb31faeb6b8dab80aa96cbb2 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 11:59:39 +0100 Subject: [PATCH 071/103] Update audit-log.md --- website/docs/docs/cloud/manage-access/audit-log.md | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/website/docs/docs/cloud/manage-access/audit-log.md b/website/docs/docs/cloud/manage-access/audit-log.md index 434a67dc1fb..80ee7f3d116 100644 --- a/website/docs/docs/cloud/manage-access/audit-log.md +++ b/website/docs/docs/cloud/manage-access/audit-log.md @@ -16,14 +16,10 @@ The dbt Cloud audit log stores all the events that occurred in your organization ## Accessing the audit log -To access audit log, click the gear icon in the top right, then click **Audit Log**. - -
+To access the audit log, click the gear icon in the top right, then click **Audit Log**. -
- ## Understanding the audit log On the audit log page, you will see a list of various events and their associated event data. Each of these events show the following information in dbt: @@ -161,17 +157,15 @@ The audit log supports various events for different objects in dbt Cloud. You wi You can search the audit log to find a specific event or actor, which is limited to the ones listed in [Events in audit log](#events-in-audit-log). The audit log successfully lists historical events spanning the last 90 days. You can search for an actor or event using the search bar, and then narrow your results using the time window. -
-
## Exporting logs You can use the audit log to export all historical audit results for security, compliance, and analysis purposes: -- For events within 90 days — dbt Cloud will automatically display the 90 days selectable date range. Select **Export Selection** to download a CSV file of all the events that occurred in your organization within 90 days. +- For events within 90 days — dbt Cloud will automatically display the 90-day selectable date range. Select **Export Selection** to download a CSV file of all the events that occurred in your organization within 90 days. - For events beyond 90 days — Select **Export All**. The Account Admin will receive an email link to download a CSV file of all the events that occurred in your organization. From 2b49cfa35c2eccd1cdde8e9efb23e4d6ed287ed6 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Thu, 7 Sep 2023 13:21:37 +0100 Subject: [PATCH 072/103] add metric offset --- website/docs/docs/build/derived-metrics.md | 109 ++++++++++++++++++--- 1 file changed, 95 insertions(+), 14 deletions(-) diff --git a/website/docs/docs/build/derived-metrics.md b/website/docs/docs/build/derived-metrics.md index 375794cc5c8..0c5ba3778a1 100644 --- a/website/docs/docs/build/derived-metrics.md +++ b/website/docs/docs/build/derived-metrics.md @@ -6,7 +6,7 @@ sidebar_label: Derived tags: [Metrics, Semantic Layer] --- -In MetricFlow, derived metrics are metrics created by defining an expression using other metrics. They allow performing calculations on top of existing metrics. This proves useful for combining metrics and applying arithmetic functions to aggregated columns, such as, you can define a profit metric. +In MetricFlow, derived metrics are metrics created by defining an expression using other metrics. They enable you to perform calculations with existing metrics. This is helpful for combining metrics and doing math functions on aggregated columns, like creating a profit metric. The parameters, description, and type for derived metrics are: @@ -21,7 +21,8 @@ In MetricFlow, derived metrics are metrics created by defining an expression usi | `metrics` | The list of metrics used in the derived metrics. | Required | | `alias` | Optional alias for the metric that you can use in the expr. | Optional | | `filter` | Optional filter to apply to the metric. | Optional | -| `offset_window` | Set the period for the offset window, such as 1 month. This will return the value of the metric one month from the metric time. | Required | +| `offset_window` | Set the period for the offset window, such as 1 month. This will return the value of the metric one month from the metric time. This can't be used with `offset_to_grain`. | Required | +| `offset_to_grain` | Specifies the granularity or level of detail for the offset, such as a day. This means if you set your `offset_to_grain: day`, the offset is applied at the daily level. If you set it to "hour," it means the offset is at the hourly level. This can't be used with `offset_window`. | Required | The following displays the complete specification for derived metrics, along with an example. @@ -37,7 +38,7 @@ metrics: - name: the name of the metrics. must reference a metric you have already defined # Required alias: optional alias for the metric that you can use in the expr # Optional filter: optional filter to apply to the metric # Optional - offset_window: set the period for the offset window i.e 1 month. This will return the value of the metric one month from the metric time. 
# Required + offset_window: set the period for the offset window, such as 1 month. This will return the value of the metric one month from the metric time. # Required ``` ## Derived metrics example @@ -85,17 +86,97 @@ metrics: ## Derived metric offset -You may want to use an offset value of a metric in the definition of a derived metric. For example, you can model the retention rate by using a derived metric with an offset, which involves calculating (active customers at the end of the month/active customers at the beginning of the month). +To perform calculations using a metric's value from a previous time period, you can add an offset parameter to a derived metric. For example, if you want to calculate period-over-period growth or track user retention, you can use this metric offset. + +**Note:** You must include the [`metric_time` dimension](/docs/build/dimensions#time) when querying a derived metric with an offset window. + +The following example displays how you can calculate monthly revenue growth using a 1-month offset window: ```yaml -metrics: -- name: user_retention - type: derived - type_params: - expr: active_customers/active_customers_t1m - metrics: - - name: active_customers # these are all metrics (can be a derived metric, meaning building a derived metric with derived metrics) - - name: active_customers - offset_window: 1 month - alias: active_customers_t1m +- name: customer_retention + description: "Percentage of customers that are active now and those active 1 month ago" + label: customer_retention + type_params: + expr: (active_customers/ active_customers_prev_month) + metrics: + - name: active_customers + alias: current_active_customers + - name: active_customers + offset_window: 1 month + alias: active_customers_prev_month +``` + +### Offset windows and granularity + +You can query any granularity and offset window combination. The following examples queries a metric with a 7-day offset and a monthly grain: + +```yaml +- name: d7_booking_change + description: "Difference between bookings now and 7 days ago" + type: derived + label: d7 Bookings Change + type_params: + expr: bookings - bookings_7_days_ago + metrics: + - name: bookings + alias: current_bookings + - name: bookings + offset_window: 7 days + alias: bookings_7_days_ago + +``` + +**Using `offset_to_grain`** + +You can set an `offset_to_grain` to specify the granularity or level of detail for the offset, such as a day or hour. Something to note is that you can't use this with `offset_window`. The following example queries a metric with an hourly offset and a monthly grain: + +```yaml +- name: d7_booking_change + description: "Calculate bookings per hour for the current month" + type: derived + label: d7 Bookings Change + type_params: + expr: bookings * bookings_hourly + metrics: + - name: bookings + alias: current_bookings + - name: bookings + offset_to_grain: hour + alias: bookings_hourly +``` + +### Derived metric offset calculation + +When you run the query `mf query --metrics d7_booking_change --group-by metric_time__month` for the metric, here's how it's calculated: + +1. We retrieve the raw, unaggregated dataset with the specified measures and dimensions at the smallest level of detail, which is currently 'day'. +2. Then, we perform an offset join on the daily dataset, followed by performing a date trunc and aggregation to the requested granularity. 
+ For example, to calculate `d7_booking_change` for July 2017: + - First we sum up all the booking values for each day in July to calculate the bookings metric. + - The following table displays the range of days that make up this monthly aggregation. + +| | Orders | Metric_time | +| - | ---- | -------- | +| | 330 | 2017-07-31 | +| | 7030 | 2017-07-30 to 2017-07-02 | +| | 78 | 2017-07-01 | +| Total | 7438 | 2017-07-01 | + +3. Next, we calculate July's bookings with a 7-day offset. The following table displays the range of days that make up this monthly aggregation. Note that the month begins 7 days later (offset by 7 days) on 2017-07-24. + +| | Orders | Metric_time | +| - | ---- | -------- | +| | 329 | 2017-07-24 | +| | 6840 | 2017-07-23 to 2017-06-35 | +| | 83 | 2017-06-24 | +| Total | 7252 | 2017-07-01 | + +4. Lastly, we calculate the derived metric and return the final result set: + +```bash +bookings - bookings_7_days_ago would be compile as 7438 - 7252 = 186. ``` + +| d7_booking_change | metric_time__month | +| ----------------- | ------------------ | +| 186 | 2017-07-01 | From 0267590e19f006dbb22ce22135cc297c2b5c977c Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 13:23:17 +0100 Subject: [PATCH 073/103] Update website/docs/docs/cloud/manage-access/audit-log.md --- website/docs/docs/cloud/manage-access/audit-log.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/manage-access/audit-log.md b/website/docs/docs/cloud/manage-access/audit-log.md index 80ee7f3d116..c2f19274097 100644 --- a/website/docs/docs/cloud/manage-access/audit-log.md +++ b/website/docs/docs/cloud/manage-access/audit-log.md @@ -18,7 +18,7 @@ The dbt Cloud audit log stores all the events that occurred in your organization To access the audit log, click the gear icon in the top right, then click **Audit Log**. - + ## Understanding the audit log From 3616038281cded73a4b4cfb70fe2fff3ab6ad08b Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 13:26:49 +0100 Subject: [PATCH 074/103] Update derived-metrics.md --- website/docs/docs/build/derived-metrics.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/build/derived-metrics.md b/website/docs/docs/build/derived-metrics.md index 0c5ba3778a1..f255f71b07c 100644 --- a/website/docs/docs/build/derived-metrics.md +++ b/website/docs/docs/build/derived-metrics.md @@ -108,7 +108,7 @@ The following example displays how you can calculate monthly revenue growth usin ### Offset windows and granularity -You can query any granularity and offset window combination. The following examples queries a metric with a 7-day offset and a monthly grain: +You can query any granularity and offset window combination. The following example queries a metric with a 7-day offset and a monthly grain: ```yaml - name: d7_booking_change @@ -152,7 +152,7 @@ When you run the query `mf query --metrics d7_booking_change --group-by metric_ 1. We retrieve the raw, unaggregated dataset with the specified measures and dimensions at the smallest level of detail, which is currently 'day'. 2. Then, we perform an offset join on the daily dataset, followed by performing a date trunc and aggregation to the requested granularity. For example, to calculate `d7_booking_change` for July 2017: - - First we sum up all the booking values for each day in July to calculate the bookings metric. 
+ - First, we sum up all the booking values for each day in July to calculate the bookings metric. - The following table displays the range of days that make up this monthly aggregation. | | Orders | Metric_time | From c8be2b0c01ee6ba7f4e673e75766e2d57293e17b Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 13:37:38 +0100 Subject: [PATCH 075/103] Update audit-log.md --- website/docs/docs/cloud/manage-access/audit-log.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/manage-access/audit-log.md b/website/docs/docs/cloud/manage-access/audit-log.md index c2f19274097..bebc68306c4 100644 --- a/website/docs/docs/cloud/manage-access/audit-log.md +++ b/website/docs/docs/cloud/manage-access/audit-log.md @@ -18,7 +18,7 @@ The dbt Cloud audit log stores all the events that occurred in your organization To access the audit log, click the gear icon in the top right, then click **Audit Log**. - + ## Understanding the audit log From ec3ed16cf888a6282fae978dd73f3f03cdce6233 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 13:42:15 +0100 Subject: [PATCH 076/103] Update website/snippets/_cloud-environments-info.md --- website/snippets/_cloud-environments-info.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index aed2a6316bb..26ef01248f8 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -65,7 +65,7 @@ If you're developing in the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in - If the attribute doesn't exist, it will add the attribute or value pair to the profile. -- Only **top-level** keys are accepted in extended attributes. In other words, if you wish to update a particular *sub-key value*, you will need to update the entire top-level key as a JSON block in your resulting YAML. For example, assuming you wish to pass in a custom override for a specific individual [service account JSON field](/docs/core/connect-data-platform/bigquery-setup#service-account-json) for your BigQuery connection (e.g. `project_id` or `client_email`), you will need to pass in an override for the entire top-level `keyfile_json` key / attribute instead via extended attributes (with the sub-fields passed in as a nested JSON block). +Only the **top-level keys** are accepted in extended attributes. This means that if you want to change a specific sub-key value, you must provide the entire top-level key as a JSON block in your resulting YAML. For example, if you want to customize a particular field within a [service account JSON](/docs/core/connect-data-platform/bigquery-setup#service-account-json) for your BigQuery connection (like 'project_id' or 'client_email'), you need to provide an override for the entire top-level 'keyfile_json' main key/attribute using extended attributes. Include the sub-fields as a nested JSON block. 
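For instance, here is a minimal, hypothetical sketch of what such a full-key override could look like, assuming a standard BigQuery service account keyfile layout (all values are placeholders rather than real credentials):

```yaml
# Hypothetical sketch only: override the entire keyfile_json block in Extended Attributes,
# not just the sub-keys you want to change (such as project_id or client_email).
keyfile_json:
  type: service_account
  project_id: placeholder-project-id
  private_key_id: placeholder-private-key-id
  private_key: "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
  client_email: placeholder-sa@placeholder-project-id.iam.gserviceaccount.com
  client_id: "000000000000000000000"
```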
The following code is an example of the types of attributes you can add in the **Extended Attributes** text box: From dcd73f57c3f0b64a80b3bdebf909e4d83f779128 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 13:42:44 +0100 Subject: [PATCH 077/103] Update website/snippets/_cloud-environments-info.md --- website/snippets/_cloud-environments-info.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index 26ef01248f8..5388379dc34 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -65,7 +65,7 @@ If you're developing in the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in - If the attribute doesn't exist, it will add the attribute or value pair to the profile. -Only the **top-level keys** are accepted in extended attributes. This means that if you want to change a specific sub-key value, you must provide the entire top-level key as a JSON block in your resulting YAML. For example, if you want to customize a particular field within a [service account JSON](/docs/core/connect-data-platform/bigquery-setup#service-account-json) for your BigQuery connection (like 'project_id' or 'client_email'), you need to provide an override for the entire top-level 'keyfile_json' main key/attribute using extended attributes. Include the sub-fields as a nested JSON block. +Only the **top-level keys** are accepted in extended attributes. This means that if you want to change a specific sub-key value, you must provide the entire top-level key as a JSON block in your resulting YAML. For example, if you want to customize a particular field within a [service account JSON](/docs/core/connect-data-platform/bigquery-setup#service-account-json) for your BigQuery connection (like 'project_id' or 'client_email'), you need to provide an override for the entire top-level `keyfile_json` main key/attribute using extended attributes. Include the sub-fields as a nested JSON block. The following code is an example of the types of attributes you can add in the **Extended Attributes** text box: From dae162a96249802fbd80489a860cc1012de705cb Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 13:45:25 +0100 Subject: [PATCH 078/103] Update website/docs/docs/cloud/manage-access/audit-log.md --- website/docs/docs/cloud/manage-access/audit-log.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/manage-access/audit-log.md b/website/docs/docs/cloud/manage-access/audit-log.md index bebc68306c4..98bf660b259 100644 --- a/website/docs/docs/cloud/manage-access/audit-log.md +++ b/website/docs/docs/cloud/manage-access/audit-log.md @@ -18,7 +18,7 @@ The dbt Cloud audit log stores all the events that occurred in your organization To access the audit log, click the gear icon in the top right, then click **Audit Log**. - + ## Understanding the audit log From 46f2d4a830c09bf5b34478cc30d7b5fd2b3b426c Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 13:52:08 +0100 Subject: [PATCH 079/103] Update cloud-seats-and-users.md users are confused and asking for more clarify on IT licenses and job notifications. this pr clarifies that IT users can't set job notifications, only receive them. eng and product determining whether this is something that will change in the future. 
until then, this pr confirms the above. --- .../docs/docs/cloud/manage-access/cloud-seats-and-users.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md index 6b68d440ba3..4589619f8ff 100644 --- a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md +++ b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md @@ -21,8 +21,8 @@ The user's assigned license determines the specific capabilities they can access | API Access | βœ… | ❌ | ❌ | | Use [Source Freshness](/docs/deploy/source-freshness) | βœ… | βœ… | ❌ | | Use [Docs](/docs/collaborate/build-and-view-your-docs) | βœ… | βœ… | ❌ | -| Receive [Job notifications](/docs/deploy/job-notifications) | βœ… | βœ… | βœ… | -*Available on Enterprise and Team plans only and doesn't count toward seat usage. Please note, IT seats are limited to 1 seat per Team or Enterprise account. +| Receive [Job notifications](/docs/deploy/job-notifications) | βœ… | βœ… | βœ… Note, that IT users set job notifications.| +*Available on Enterprise and Team plans only and doesn't count toward seat usage. Please note, that IT seats are limited to 1 seat per Team or Enterprise account. ## Licenses From 63d3a145f4d3e6f4f3c108fbd015b84ac95f756a Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 13:55:06 +0100 Subject: [PATCH 080/103] Update job-notifications.md --- website/docs/docs/deploy/job-notifications.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/deploy/job-notifications.md b/website/docs/docs/deploy/job-notifications.md index 72725a1e460..8d242abac78 100644 --- a/website/docs/docs/deploy/job-notifications.md +++ b/website/docs/docs/deploy/job-notifications.md @@ -9,10 +9,10 @@ Setting up notifications in dbt Cloud will allow you to receive alerts via Email ### Email -These are the following options for setting up email notifications: +These are the following options for setting up email notifications. Refer to [Users and licenses](/docs/cloud/manage-access/seats-and-users) for info on license types eligible for email notifications. -- As a **user** — You can set up email notifications for yourself under your Profile. -- As an **admin** — You can set up notifications on behalf of your team members. Refer to [Users and licenses](/docs/cloud/manage-access/seats-and-users) for info on license types eligible for email notifications. +- As a **user** — You can set up email notifications for yourself under your Profile. +- As an **admin** — You can set up notifications on behalf of your team members. 
To set up job notifications, follow these steps: From 59689fd84dd5b775f1acead720cfd61d84a4e51b Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 13:56:31 +0100 Subject: [PATCH 081/103] Update cloud-seats-and-users.md --- website/docs/docs/cloud/manage-access/cloud-seats-and-users.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md index 4589619f8ff..67e05964e89 100644 --- a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md +++ b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md @@ -21,7 +21,7 @@ The user's assigned license determines the specific capabilities they can access | API Access | βœ… | ❌ | ❌ | | Use [Source Freshness](/docs/deploy/source-freshness) | βœ… | βœ… | ❌ | | Use [Docs](/docs/collaborate/build-and-view-your-docs) | βœ… | βœ… | ❌ | -| Receive [Job notifications](/docs/deploy/job-notifications) | βœ… | βœ… | βœ… Note, that IT users set job notifications.| +| Receive [Job notifications](/docs/deploy/job-notifications) | βœ… | βœ… Note, that Read-Only users can't set job notifications. | βœ… Note, that IT users can't set job notifications.| *Available on Enterprise and Team plans only and doesn't count toward seat usage. Please note, that IT seats are limited to 1 seat per Team or Enterprise account. ## Licenses From 33616bc149ece46a9085d8ae932faca98e6d3c6f Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 15:25:30 +0100 Subject: [PATCH 082/103] Update cloud-seats-and-users.md --- website/docs/docs/cloud/manage-access/cloud-seats-and-users.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md index 67e05964e89..d35837e6890 100644 --- a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md +++ b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md @@ -21,7 +21,7 @@ The user's assigned license determines the specific capabilities they can access | API Access | βœ… | ❌ | ❌ | | Use [Source Freshness](/docs/deploy/source-freshness) | βœ… | βœ… | ❌ | | Use [Docs](/docs/collaborate/build-and-view-your-docs) | βœ… | βœ… | ❌ | -| Receive [Job notifications](/docs/deploy/job-notifications) | βœ… | βœ… Note, that Read-Only users can't set job notifications. | βœ… Note, that IT users can't set job notifications.| +| Receive [Job notifications](/docs/deploy/job-notifications) | βœ… | βœ…
Can't set job notifications. | βœ…
Can't set job notifications.| *Available on Enterprise and Team plans only and doesn't count toward seat usage. Please note, that IT seats are limited to 1 seat per Team or Enterprise account. ## Licenses From a6ec7fdd459b37069284438639b749b0eef5ed43 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 16:39:27 +0100 Subject: [PATCH 083/103] Update cloud-seats-and-users.md clarify it/readonly can't set job notifications --- .../docs/docs/cloud/manage-access/cloud-seats-and-users.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md index d35837e6890..3e3b829cfbf 100644 --- a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md +++ b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md @@ -8,8 +8,8 @@ sidebar: "Users and licenses" In dbt Cloud, _licenses_ are used to allocate users to your account. There are three different types of licenses in dbt Cloud: - **Developer** — Granted access to the Deployment and [Development](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) functionality in dbt Cloud. -- **Read-Only** — Intended to view the [artifacts](/docs/deploy/artifacts) created in a dbt Cloud account. -- **IT** — Can manage users, groups, and licenses, among other permissions. Available on Enterprise and Team plans only. +- **Read-Only** — Intended to view the [artifacts](/docs/deploy/artifacts) created in a dbt Cloud account, can receive job notifications but not configure them. +- **IT** — Can manage users, groups, and licenses, among other permissions. Can receive job notifications but not configure them. Available on Enterprise and Team plans only. The user's assigned license determines the specific capabilities they can access in dbt Cloud. @@ -21,7 +21,7 @@ The user's assigned license determines the specific capabilities they can access | API Access | βœ… | ❌ | ❌ | | Use [Source Freshness](/docs/deploy/source-freshness) | βœ… | βœ… | ❌ | | Use [Docs](/docs/collaborate/build-and-view-your-docs) | βœ… | βœ… | ❌ | -| Receive [Job notifications](/docs/deploy/job-notifications) | βœ… | βœ…
Can't set job notifications. | βœ…
Can't set job notifications.| +| Receive [Job notifications](/docs/deploy/job-notifications) | βœ… | βœ… | βœ… | *Available on Enterprise and Team plans only and doesn't count toward seat usage. Please note, that IT seats are limited to 1 seat per Team or Enterprise account. ## Licenses From 5a14b28bc6dd5852ccb29cd895a618a19e5b1c0f Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 16:47:31 +0100 Subject: [PATCH 084/103] Update derived-metrics.md removed offset_to_grain --- website/docs/docs/build/derived-metrics.md | 24 +--------------------- 1 file changed, 1 insertion(+), 23 deletions(-) diff --git a/website/docs/docs/build/derived-metrics.md b/website/docs/docs/build/derived-metrics.md index f255f71b07c..703c5ecf756 100644 --- a/website/docs/docs/build/derived-metrics.md +++ b/website/docs/docs/build/derived-metrics.md @@ -22,7 +22,6 @@ In MetricFlow, derived metrics are metrics created by defining an expression usi | `alias` | Optional alias for the metric that you can use in the expr. | Optional | | `filter` | Optional filter to apply to the metric. | Optional | | `offset_window` | Set the period for the offset window, such as 1 month. This will return the value of the metric one month from the metric time. This can't be used with `offset_to_grain`. | Required | -| `offset_to_grain` | Specifies the granularity or level of detail for the offset, such as a day. This means if you set your `offset_to_grain: day`, the offset is applied at the daily level. If you set it to "hour," it means the offset is at the hourly level. This can't be used with `offset_window`. | Required | The following displays the complete specification for derived metrics, along with an example. @@ -126,27 +125,6 @@ You can query any granularity and offset window combination. The following examp ``` -**Using `offset_to_grain`** - -You can set an `offset_to_grain` to specify the granularity or level of detail for the offset, such as a day or hour. Something to note is that you can't use this with `offset_window`. The following example queries a metric with an hourly offset and a monthly grain: - -```yaml -- name: d7_booking_change - description: "Calculate bookings per hour for the current month" - type: derived - label: d7 Bookings Change - type_params: - expr: bookings * bookings_hourly - metrics: - - name: bookings - alias: current_bookings - - name: bookings - offset_to_grain: hour - alias: bookings_hourly -``` - -### Derived metric offset calculation - When you run the query `mf query --metrics d7_booking_change --group-by metric_time__month` for the metric, here's how it's calculated: 1. We retrieve the raw, unaggregated dataset with the specified measures and dimensions at the smallest level of detail, which is currently 'day'. 
@@ -167,7 +145,7 @@ When you run the query `mf query --metrics d7_booking_change --group-by metric_ | | Orders | Metric_time | | - | ---- | -------- | | | 329 | 2017-07-24 | -| | 6840 | 2017-07-23 to 2017-06-35 | +| | 6840 | 2017-07-23 to 2017-06-30 | | | 83 | 2017-06-24 | | Total | 7252 | 2017-07-01 | From bcdc8553e47e59f965a43187204af21ba740aba7 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 16:55:35 +0100 Subject: [PATCH 085/103] Update derived-metrics.md fix yaml indentations --- website/docs/docs/build/derived-metrics.md | 43 +++++++++++----------- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/website/docs/docs/build/derived-metrics.md b/website/docs/docs/build/derived-metrics.md index 703c5ecf756..bef7346b353 100644 --- a/website/docs/docs/build/derived-metrics.md +++ b/website/docs/docs/build/derived-metrics.md @@ -93,16 +93,16 @@ The following example displays how you can calculate monthly revenue growth usin ```yaml - name: customer_retention - description: "Percentage of customers that are active now and those active 1 month ago" - label: customer_retention - type_params: - expr: (active_customers/ active_customers_prev_month) - metrics: - - name: active_customers - alias: current_active_customers - - name: active_customers - offset_window: 1 month - alias: active_customers_prev_month + description: Percentage of customers that are active now and those active 1 month ago + label: customer_retention + type_params: + expr: (active_customers/ active_customers_prev_month) + metrics: + - name: active_customers + alias: current_active_customers + - name: active_customers + offset_window: 1 month + alias: active_customers_prev_month ``` ### Offset windows and granularity @@ -111,18 +111,17 @@ You can query any granularity and offset window combination. The following examp ```yaml - name: d7_booking_change - description: "Difference between bookings now and 7 days ago" - type: derived - label: d7 Bookings Change - type_params: - expr: bookings - bookings_7_days_ago - metrics: - - name: bookings - alias: current_bookings - - name: bookings - offset_window: 7 days - alias: bookings_7_days_ago - + description: Difference between bookings now and 7 days ago + type: derived + label: d7 Bookings Change + type_params: + expr: bookings - bookings_7_days_ago + metrics: + - name: bookings + alias: current_bookings + - name: bookings + offset_window: 7 days + alias: bookings_7_days_ago ``` When you run the query `mf query --metrics d7_booking_change --group-by metric_time__month` for the metric, here's how it's calculated: From 56be8807f134ce77ce1996c06effead9747841d5 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 16:58:48 +0100 Subject: [PATCH 086/103] Update cloud-seats-and-users.md --- .../docs/docs/cloud/manage-access/cloud-seats-and-users.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md index 3e3b829cfbf..04dfbe093c3 100644 --- a/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md +++ b/website/docs/docs/cloud/manage-access/cloud-seats-and-users.md @@ -8,8 +8,8 @@ sidebar: "Users and licenses" In dbt Cloud, _licenses_ are used to allocate users to your account. 
There are three different types of licenses in dbt Cloud: - **Developer** — Granted access to the Deployment and [Development](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) functionality in dbt Cloud. -- **Read-Only** — Intended to view the [artifacts](/docs/deploy/artifacts) created in a dbt Cloud account, can receive job notifications but not configure them. -- **IT** — Can manage users, groups, and licenses, among other permissions. Can receive job notifications but not configure them. Available on Enterprise and Team plans only. +- **Read-Only** — Intended to view the [artifacts](/docs/deploy/artifacts) created in a dbt Cloud account. Read-Only users can receive job notifications but not configure them. +- **IT** — Can manage users, groups, and licenses, among other permissions. IT users can receive job notifications but not configure them. Available on Enterprise and Team plans only. The user's assigned license determines the specific capabilities they can access in dbt Cloud. From d1ae33efc19b30229bef9b8f031f5739112b5ef7 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 7 Sep 2023 17:21:49 +0100 Subject: [PATCH 087/103] Update derived-metrics.md removing 'This can't be used with `offset_to_grain`.' as it's not applicable now --- website/docs/docs/build/derived-metrics.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/build/derived-metrics.md b/website/docs/docs/build/derived-metrics.md index bef7346b353..2ad1c3e368c 100644 --- a/website/docs/docs/build/derived-metrics.md +++ b/website/docs/docs/build/derived-metrics.md @@ -21,7 +21,7 @@ In MetricFlow, derived metrics are metrics created by defining an expression usi | `metrics` | The list of metrics used in the derived metrics. | Required | | `alias` | Optional alias for the metric that you can use in the expr. | Optional | | `filter` | Optional filter to apply to the metric. | Optional | -| `offset_window` | Set the period for the offset window, such as 1 month. This will return the value of the metric one month from the metric time. This can't be used with `offset_to_grain`. | Required | +| `offset_window` | Set the period for the offset window, such as 1 month. This will return the value of the metric one month from the metric time. | Required | The following displays the complete specification for derived metrics, along with an example. From f6b740d1509c0d991ed860c20881fc50cdd12619 Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Thu, 7 Sep 2023 10:06:43 -0700 Subject: [PATCH 088/103] Update website/docs/docs/deploy/deploy-jobs.md --- website/docs/docs/deploy/deploy-jobs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/deploy/deploy-jobs.md b/website/docs/docs/deploy/deploy-jobs.md index 5287bfd3109..82abcb0548b 100644 --- a/website/docs/docs/deploy/deploy-jobs.md +++ b/website/docs/docs/deploy/deploy-jobs.md @@ -90,7 +90,7 @@ If you're interested in joining our beta, please fill out our Google Form to [si - **Environment Variables** — Define [environment variables](/docs/build/environment-variables) to customize the behavior of your project when the deploy job runs. - **Target Name** — Define theΒ [target name](/docs/build/custom-target-names) to customize the behavior of your project when the deploy job runs. Environment variables and target names are often used interchangeably. 
- **Run Timeout** — Cancel the deploy job if the run time exceeds the timeout value. - - **Compare changes against ** option β€” By default, it’s set to **No deferral**. For Deploy jobs, you can select either no deferral, deferral to an environment, or self defer (to the same job). + - **Compare changes against** β€” By default, it’s set to **No deferral**. Select either **Environment** or **This Job** to let dbt Cloud know what it should compare the changes against. :::info Older versions of dbt Cloud only allow you to defer to a specific job instead of an environment. Deferral to a job compares state against the project code that was run in the deferred job's last successful run. While deferral to an environment is more efficient as dbt Cloud will compare against the project representation (which is stored in the `manifest.json`) of the last successful deploy job run that executed in the deferred environment. By considering _all_ deploy jobs that run in the deferred environment, dbt Cloud will get a more accurate, latest project representation state. From b032b3d7a31a0eb135dc0ec8159523b936f891e2 Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Thu, 7 Sep 2023 10:07:21 -0700 Subject: [PATCH 089/103] Update website/docs/docs/deploy/deploy-jobs.md --- website/docs/docs/deploy/deploy-jobs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/deploy/deploy-jobs.md b/website/docs/docs/deploy/deploy-jobs.md index 82abcb0548b..ea510827a07 100644 --- a/website/docs/docs/deploy/deploy-jobs.md +++ b/website/docs/docs/deploy/deploy-jobs.md @@ -90,7 +90,7 @@ If you're interested in joining our beta, please fill out our Google Form to [si - **Environment Variables** — Define [environment variables](/docs/build/environment-variables) to customize the behavior of your project when the deploy job runs. - **Target Name** — Define theΒ [target name](/docs/build/custom-target-names) to customize the behavior of your project when the deploy job runs. Environment variables and target names are often used interchangeably. - **Run Timeout** — Cancel the deploy job if the run time exceeds the timeout value. - - **Compare changes against** β€” By default, it’s set to **No deferral**. Select either **Environment** or **This Job** to let dbt Cloud know what it should compare the changes against. + - **Compare changes against** — By default, it’s set to **No deferral**. Select either **Environment** or **This Job** to let dbt Cloud know what it should compare the changes against. :::info Older versions of dbt Cloud only allow you to defer to a specific job instead of an environment. Deferral to a job compares state against the project code that was run in the deferred job's last successful run. While deferral to an environment is more efficient as dbt Cloud will compare against the project representation (which is stored in the `manifest.json`) of the last successful deploy job run that executed in the deferred environment. By considering _all_ deploy jobs that run in the deferred environment, dbt Cloud will get a more accurate, latest project representation state. 
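In practice, deferral is typically paired with state-based selection in the job's commands. The following is a minimal, hedged sketch of such a command (the exact selectors depend on your project):

```bash
# Build only the models that changed relative to the comparison state,
# plus everything downstream of them.
dbt build --select state:modified+
```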
From 421a8bfbc2620c73c3c14ee7900c4e8a68dcd9bb Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Thu, 7 Sep 2023 10:07:47 -0700 Subject: [PATCH 090/103] Update website/docs/docs/deploy/deploy-jobs.md --- website/docs/docs/deploy/deploy-jobs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/deploy/deploy-jobs.md b/website/docs/docs/deploy/deploy-jobs.md index ea510827a07..56b8400213f 100644 --- a/website/docs/docs/deploy/deploy-jobs.md +++ b/website/docs/docs/deploy/deploy-jobs.md @@ -90,7 +90,7 @@ If you're interested in joining our beta, please fill out our Google Form to [si - **Environment Variables** — Define [environment variables](/docs/build/environment-variables) to customize the behavior of your project when the deploy job runs. - **Target Name** — Define theΒ [target name](/docs/build/custom-target-names) to customize the behavior of your project when the deploy job runs. Environment variables and target names are often used interchangeably. - **Run Timeout** — Cancel the deploy job if the run time exceeds the timeout value. - - **Compare changes against** — By default, it’s set to **No deferral**. Select either **Environment** or **This Job** to let dbt Cloud know what it should compare the changes against. + - **Compare changes against** — By default, it’s set to **No deferral**. Select either **Environment** or **This Job** to let dbt Cloud know what it should compare the changes against. :::info Older versions of dbt Cloud only allow you to defer to a specific job instead of an environment. Deferral to a job compares state against the project code that was run in the deferred job's last successful run. While deferral to an environment is more efficient as dbt Cloud will compare against the project representation (which is stored in the `manifest.json`) of the last successful deploy job run that executed in the deferred environment. By considering _all_ deploy jobs that run in the deferred environment, dbt Cloud will get a more accurate, latest project representation state. 
From eb27b39a7c7759849b5c3d3f17726ec36805fabc Mon Sep 17 00:00:00 2001 From: Doug Beatty <44704949+dbeatty10@users.noreply.github.com> Date: Thu, 7 Sep 2023 18:04:38 -0600 Subject: [PATCH 091/103] Remove-1.6 from future releases table --- website/snippets/core-versions-table.md | 1 - 1 file changed, 1 deletion(-) diff --git a/website/snippets/core-versions-table.md b/website/snippets/core-versions-table.md index fb2e2a5d60e..5832f9f14c3 100644 --- a/website/snippets/core-versions-table.md +++ b/website/snippets/core-versions-table.md @@ -17,7 +17,6 @@ _Future release dates are tentative and subject to change._ | dbt Core | Planned Release | Critical & dbt Cloud Support Until | |----------|-----------------|-------------------------------------| -| **v1.6** | _July 2023_ | _July 2024_ | | **v1.7** | _Oct 2023_ | _Oct 2024_ | | **v1.8** | _Jan 2024_ | _Jan 2025_ | | **v1.9** | _Apr 2024_ | _Apr 2025_ | From 875a43b7b4bf277a76fbb473b49c2d2acf57ce6f Mon Sep 17 00:00:00 2001 From: Jeremy Cohen Date: Fri, 8 Sep 2023 12:20:07 +0200 Subject: [PATCH 092/103] Update use cases for 'local' packages --- website/docs/docs/build/packages.md | 23 ++++++++++++++++++++--- 1 file changed, 20 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md index d4cebc7a6f0..fd3ebebfac0 100644 --- a/website/docs/docs/build/packages.md +++ b/website/docs/docs/build/packages.md @@ -284,18 +284,35 @@ packages: ### Local packages -Packages that you have stored locally can be installed by specifying the path to the project, like so: +A "local" package is another dbt project that you have access to from within the local file system. It can be installed by specifying the path to the project. The best-supported pattern is when the project is nested within a subdirectory, relative to the current project's directory. ```yaml packages: - - local: /opt/dbt/redshift # use a local path + - local: relative/path/to/subdirectory ``` -Local packages should only be used for specific situations, for example, when testing local changes to a package. +Other patterns may work in some cases, but not always. For example, if you install this project as a package elsewhere, or try running it on a different system, the relative and absolute paths will yield the same results. + + + +```yaml +packages: + # not recommended - these support for these patterns vary + - local: /../../redshift # relative path to a parent directory + - local: /opt/dbt/redshift # absolute path on the system +``` + + + +As such, there are a few specific use cases where a "local" package is recommended: +1. A monorepo containing multiple projects, each nested in a subdirectory. "Local" packages enable combining projects together, for coordinated development and deployment. +2. Testing changes to one project/package within the context of a downstream project/package that uses it. By temporarily switching the installation to a "local" package, you can make changes to the former and immediately test them in the latter, enabling quicker iteration. This is similar to [editable installs](https://pip.pypa.io/en/stable/topics/local-project-installs/) in Python. +3. A nested project that defines fixtures and tests for a project of utility macros β€”Β for example, [the integration tests within the `dbt-utils` package](https://github.com/dbt-labs/dbt-utils/tree/main/integration_tests) + ## What packages are available? Check out [dbt Hub](https://hub.getdbt.com) to see the library of published dbt packages! 
From 768dbd7def72a363214344edd566351ab5cbda18 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 8 Sep 2023 13:27:15 +0100 Subject: [PATCH 093/103] Update website/docs/docs/build/packages.md --- website/docs/docs/build/packages.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md index fd3ebebfac0..3f24e10bdfe 100644 --- a/website/docs/docs/build/packages.md +++ b/website/docs/docs/build/packages.md @@ -284,7 +284,7 @@ packages: ### Local packages -A "local" package is another dbt project that you have access to from within the local file system. It can be installed by specifying the path to the project. The best-supported pattern is when the project is nested within a subdirectory, relative to the current project's directory. +A "local" package is a dbt project accessible from your local file system. You can install it by specifying the project's path. It works best when you nest the project within a subdirectory relative to your current project's directory. From 696f025dc541eae36c79481e93131ec27e2dfd45 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 8 Sep 2023 13:27:30 +0100 Subject: [PATCH 094/103] Update website/docs/docs/build/packages.md --- website/docs/docs/build/packages.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md index 3f24e10bdfe..833dbfcee63 100644 --- a/website/docs/docs/build/packages.md +++ b/website/docs/docs/build/packages.md @@ -301,7 +301,7 @@ Other patterns may work in some cases, but not always. For example, if you insta ```yaml packages: - # not recommended - these support for these patterns vary + # not recommended - the support for these patterns vary - local: /../../redshift # relative path to a parent directory - local: /opt/dbt/redshift # absolute path on the system ``` From d31edceca2ff8b7ebc294de63a7504fa6ba47be6 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 8 Sep 2023 13:27:42 +0100 Subject: [PATCH 095/103] Update website/docs/docs/build/packages.md --- website/docs/docs/build/packages.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md index 833dbfcee63..851e898d5bd 100644 --- a/website/docs/docs/build/packages.md +++ b/website/docs/docs/build/packages.md @@ -308,7 +308,7 @@ packages: -As such, there are a few specific use cases where a "local" package is recommended: +There are a few specific use cases where we recommend using a "local" package: 1. A monorepo containing multiple projects, each nested in a subdirectory. "Local" packages enable combining projects together, for coordinated development and deployment. 2. Testing changes to one project/package within the context of a downstream project/package that uses it. By temporarily switching the installation to a "local" package, you can make changes to the former and immediately test them in the latter, enabling quicker iteration. This is similar to [editable installs](https://pip.pypa.io/en/stable/topics/local-project-installs/) in Python. 3. 
A nested project that defines fixtures and tests for a project of utility macros β€”Β for example, [the integration tests within the `dbt-utils` package](https://github.com/dbt-labs/dbt-utils/tree/main/integration_tests) From 495b3ace5de4b909e2c8c79e2601426b1ba75a2d Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 8 Sep 2023 13:27:49 +0100 Subject: [PATCH 096/103] Update website/docs/docs/build/packages.md --- website/docs/docs/build/packages.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md index 851e898d5bd..c8f980e45f2 100644 --- a/website/docs/docs/build/packages.md +++ b/website/docs/docs/build/packages.md @@ -309,7 +309,7 @@ packages: There are a few specific use cases where we recommend using a "local" package: -1. A monorepo containing multiple projects, each nested in a subdirectory. "Local" packages enable combining projects together, for coordinated development and deployment. +1. **Monorepo** — When you have multiple projects, each nested in a subdirectory, within a monorepo. "Local" packages allow you to combine projects for coordinated development and deployment. 2. Testing changes to one project/package within the context of a downstream project/package that uses it. By temporarily switching the installation to a "local" package, you can make changes to the former and immediately test them in the latter, enabling quicker iteration. This is similar to [editable installs](https://pip.pypa.io/en/stable/topics/local-project-installs/) in Python. 3. A nested project that defines fixtures and tests for a project of utility macros β€”Β for example, [the integration tests within the `dbt-utils` package](https://github.com/dbt-labs/dbt-utils/tree/main/integration_tests) From 6fbb7dbfdd7a9806a5cd3308e6181fd8c613ac54 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 8 Sep 2023 13:27:56 +0100 Subject: [PATCH 097/103] Update website/docs/docs/build/packages.md --- website/docs/docs/build/packages.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md index c8f980e45f2..12ae7d916ca 100644 --- a/website/docs/docs/build/packages.md +++ b/website/docs/docs/build/packages.md @@ -311,7 +311,7 @@ packages: There are a few specific use cases where we recommend using a "local" package: 1. **Monorepo** — When you have multiple projects, each nested in a subdirectory, within a monorepo. "Local" packages allow you to combine projects for coordinated development and deployment. 2. Testing changes to one project/package within the context of a downstream project/package that uses it. By temporarily switching the installation to a "local" package, you can make changes to the former and immediately test them in the latter, enabling quicker iteration. This is similar to [editable installs](https://pip.pypa.io/en/stable/topics/local-project-installs/) in Python. -3. A nested project that defines fixtures and tests for a project of utility macros β€”Β for example, [the integration tests within the `dbt-utils` package](https://github.com/dbt-labs/dbt-utils/tree/main/integration_tests) +3. **Nested project** — When you have a nested project that defines fixtures and tests for a project of utility macros, like [the integration tests within the `dbt-utils` package](https://github.com/dbt-labs/dbt-utils/tree/main/integration_tests). 
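To ground the use cases in the list above, here is a hedged `packages.yml` sketch. Every path below is a hypothetical example, following the nested-subdirectory pattern the earlier patch recommends, and the `dbt-labs/dbt_utils` pin is only illustrative:

```yaml
# packages.yml (illustrative only; all paths are hypothetical)
packages:
  # Monorepo: each nested project lives in a subdirectory of the current project
  - local: projects/finance
  - local: projects/marketing

  # Testing changes: temporarily swap a Hub install for a local checkout
  # - package: dbt-labs/dbt_utils
  #   version: 1.1.1
  - local: sub_packages/dbt_utils
```

In the testing-changes case, you would switch back to the Hub entry once local iteration is finished.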
## What packages are available? From 6e522feb2959d961cc4b5f7a0bf8c67bc754642d Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 8 Sep 2023 13:28:01 +0100 Subject: [PATCH 098/103] Update website/docs/docs/build/packages.md --- website/docs/docs/build/packages.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md index 12ae7d916ca..4d0bfd808a1 100644 --- a/website/docs/docs/build/packages.md +++ b/website/docs/docs/build/packages.md @@ -310,7 +310,7 @@ packages: There are a few specific use cases where we recommend using a "local" package: 1. **Monorepo** — When you have multiple projects, each nested in a subdirectory, within a monorepo. "Local" packages allow you to combine projects for coordinated development and deployment. -2. Testing changes to one project/package within the context of a downstream project/package that uses it. By temporarily switching the installation to a "local" package, you can make changes to the former and immediately test them in the latter, enabling quicker iteration. This is similar to [editable installs](https://pip.pypa.io/en/stable/topics/local-project-installs/) in Python. +2. **Testing changes** — To test changes in one project or package within the context of a downstream project or package that uses it. By temporarily switching the installation to a "local" package, you can make changes to the former and immediately test them in the latter for quicker iteration. This is similar to [editable installs](https://pip.pypa.io/en/stable/topics/local-project-installs/) in Python. 3. **Nested project** — When you have a nested project that defines fixtures and tests for a project of utility macros, like [the integration tests within the `dbt-utils` package](https://github.com/dbt-labs/dbt-utils/tree/main/integration_tests). From 9d4f4912e0c6fabf598b7a931a93eec4ea336b00 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 8 Sep 2023 13:28:15 +0100 Subject: [PATCH 099/103] Update website/docs/docs/build/packages.md --- website/docs/docs/build/packages.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md index 4d0bfd808a1..97e8784416e 100644 --- a/website/docs/docs/build/packages.md +++ b/website/docs/docs/build/packages.md @@ -301,7 +301,7 @@ Other patterns may work in some cases, but not always. 
For example, if you insta ```yaml packages: - # not recommended - the support for these patterns vary + # not recommended - support for these patterns vary - local: /../../redshift # relative path to a parent directory - local: /opt/dbt/redshift # absolute path on the system ``` From fc1e487621c329ea14d47a0a8733332feb84b0cf Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Fri, 8 Sep 2023 15:31:02 -0400 Subject: [PATCH 100/103] Update website/docs/docs/core/connect-data-platform/spark-setup.md --- website/docs/docs/core/connect-data-platform/spark-setup.md | 1 - 1 file changed, 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md index 5d74a932c45..b22416fd3a5 100644 --- a/website/docs/docs/core/connect-data-platform/spark-setup.md +++ b/website/docs/docs/core/connect-data-platform/spark-setup.md @@ -230,7 +230,6 @@ connect_retries: 3 - ### Server side configuration From 2869a9de133ba56831ea4466ad52c50ee4c1ddeb Mon Sep 17 00:00:00 2001 From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com> Date: Fri, 8 Sep 2023 12:33:19 -0700 Subject: [PATCH 101/103] Update ref.md (#4052) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## What are you changing in this pull request and why? Adding that you need to set cross project dependencies when using 2 arguments with projects. Per feedback from @b-per πŸ™‡πŸ» Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> --- website/docs/reference/dbt-jinja-functions/ref.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/dbt-jinja-functions/ref.md b/website/docs/reference/dbt-jinja-functions/ref.md index b9b14bed42a..6df06a2f415 100644 --- a/website/docs/reference/dbt-jinja-functions/ref.md +++ b/website/docs/reference/dbt-jinja-functions/ref.md @@ -69,7 +69,7 @@ select * from {{ ref('model_name') }} ### Two-argument variant -There is also a two-argument variant of the `ref` function. With this variant, you can pass both a namespace (project or package) and model name to `ref` to avoid ambiguity. +You can also use a two-argument variant of the `ref` function. With this variant, you can pass both a namespace (project or package) and model name to `ref` to avoid ambiguity. When using two arguments with projects (not packages), you also need to set [cross project dependencies](/docs/collaborate/govern/project-dependencies). ```sql select * from {{ ref('project_or_package', 'model_name') }} From 82ed4eab6fa9735d6062eeb01a125c25f6f8f2a8 Mon Sep 17 00:00:00 2001 From: Alex Python Date: Fri, 8 Sep 2023 16:10:45 -0400 Subject: [PATCH 102/103] Update Databricks Section of grants.md Add bullet point for Databricks specific grants that aren't supported by dbt grants --- website/docs/reference/resource-configs/grants.md | 1 + 1 file changed, 1 insertion(+) diff --git a/website/docs/reference/resource-configs/grants.md b/website/docs/reference/resource-configs/grants.md index 68d1e6eb14e..3a65672fa5e 100644 --- a/website/docs/reference/resource-configs/grants.md +++ b/website/docs/reference/resource-configs/grants.md @@ -243,6 +243,7 @@ models: - Databricks automatically enables `grants` on SQL endpoints. 
For interactive clusters, admins should enable grant functionality using these two setup steps in the Databricks documentation:
  - [Enable table access control for your workspace](https://docs.databricks.com/administration-guide/access-control/table-acl.html)
  - [Enable table access control for a cluster](https://docs.databricks.com/security/access-control/table-acls/table-acl.html)
+- In order to grant `READ_METADATA` or `USAGE`, use [post-hooks](https://docs.getdbt.com/reference/resource-configs/pre-hook-post-hook)
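The bullet added in this patch points to post-hooks for privileges the `grants` config does not cover on Databricks. A minimal sketch of that approach, assuming a placeholder project name (`my_project`) and principal (`analysts`); the exact GRANT statement depends on your Databricks security model:

```yaml
# dbt_project.yml (a sketch: "my_project" and `analysts` are placeholders,
# and the GRANT syntax should be adapted to your workspace)
models:
  my_project:
    +post-hook:
      - "grant read_metadata on table {{ this }} to `analysts`"
```

Because the hook runs after each model builds, `{{ this }}` resolves to the relation that was just created.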
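Relatedly, the `ref.md` patch earlier in this series notes that the two-argument `ref('project_or_package', 'model_name')` form requires cross-project dependencies when the first argument is a project. A hedged sketch of that declaration, with `jaffle_finance` as a hypothetical upstream project name:

```yaml
# dependencies.yml ("jaffle_finance" is a made-up upstream project name)
projects:
  - name: jaffle_finance
```

With this in place, `{{ ref('jaffle_finance', 'model_name') }}` can resolve across projects as the updated doc describes.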
From 32d7fdf6f395178ee42b443c9c3ebb5a490d4691 Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Mon, 11 Sep 2023 12:18:23 +0100
Subject: [PATCH 103/103] Update
 website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md

---
 .../semantic-layer-3-build-semantic-models.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md b/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md
index 73fa2363aaf..a2dc55e37ae 100644
--- a/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md
+++ b/website/docs/guides/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md
@@ -150,7 +150,7 @@ from source
   dimensions:
     - name: ordered_at
       expr: date_trunc('day', ordered_at)
-      # use date_trunc(ordered_at, DAY) if using BigQuery
+      # use date_trunc(ordered_at, DAY) if using [BigQuery](/docs/build/dimensions#time)
       type: time
      type_params:
         time_granularity: day
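As a small illustration of the comment this final patch links out from, here is the same dimension written with the BigQuery argument order; all other keys are unchanged from the snippet in the diff:

```yaml
dimensions:
  - name: ordered_at
    expr: date_trunc(ordered_at, DAY)  # BigQuery argument order, per the comment in the patch
    type: time
    type_params:
      time_granularity: day
```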