PR #44 includes the following updates:
- Introduces variable `iterable__using_event_extension` to allow the `event_extension` table to be disabled and exclude its field, `experiment_id`, from persisting downstream. This permits the downstream models to run even if the source `event_extension` table does not exist. By default, the variable is set to True. If you don't have this table, you will need to set `iterable__using_event_extension` to False. For more information on how to configure the `iterable__using_event_extension` variable, refer to the README.
  - This will be a breaking change if you choose to disable the `event_extension` table, as `experiment_id` will be removed from downstream models. Conversely, if you wish to include the `experiment_id` grain, ensure that `iterable__using_event_extension` is not explicitly set to False.
  - Following this, the uniqueness tests in related models have been updated to account for whether `iterable__using_event_extension` is enabled or disabled by now relying on new surrogate keys:
    - `unique_campaign_version_id`: Unique identifier for the `iterable__campaigns` model that combines `campaign_id`, `template_id`, and, if available, `experiment_id`.
    - `unique_user_campaign_id`: Unique identifier for the `iterable__user_campaign` model that combines `unique_user_key`, `campaign_id`, and, if available, `experiment_id`.
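Disabling the table is controlled through a project variable; a minimal sketch of what this might look like in your `dbt_project.yml` (refer to the README for the authoritative configuration):

```yml
# dbt_project.yml
vars:
  iterable__using_event_extension: false  # defaults to true; set false if the source table does not exist
```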
- Persists `user_history` passthrough columns, as stipulated via the `iterable_user_history_pass_through_columns` variable, through to the `iterable__users` model. For more information on how to configure the `iterable_user_history_pass_through_columns` variable, refer to the README.
- Updates logic in `int_iterable__campaign_event_metrics`, `iterable__events`, and `iterable__user_campaign` to account for the `iterable__using_event_extension` variable being enabled or disabled. If disabled, `experiment_id` will not show up as a grain.
- Added integrity and consistency validation tests within integration tests for the `iterable__user_unsubscriptions`, `iterable__campaigns`, `iterable__events`, `iterable__user_campaign`, and `iterable__users` models.
- Updated seed data to ensure proper testing of the latest v0.8.1 `dbt_iterable_source` release, in addition to testing of the passthrough column features.
- Updated pull request and issue templates.
- Included auto-releaser GitHub Actions workflow to automate future releases.
PR #39 includes updates in response to the Aug 2023 updates for the Iterable connector. For changes in the upstream staging models, refer to the `dbt_iterable_source` changelog and respective PR #28.
- Introduced a new user key `unique_user_key`. If you are syncing the new schema from Iterable, this will be `_fivetran_user_id`, generated from hashing `user_id` and/or `email`, depending on project type. Otherwise, this is `email`, the user identifier for email-based projects and the previous unique user key used in the old schema.
  - Models that previously used `email` as a grain or as a join field have been updated to use `unique_user_key`.
- The grain in `iterable__events` was previously `event_id` but is now `unique_event_id`, a surrogate key generated from `event_id` and `_fivetran_user_id`. Before the Iterable Aug 2023 updates, the unique key for events was just `event_id`; now it is a combination of `event_id` and `_fivetran_user_id`, if it exists.
- We have removed `user_device`-related fields, as we removed the underlying object.
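As a sketch of the new grain, `unique_event_id` could be built with a surrogate-key macro such as `dbt_utils.generate_surrogate_key` (illustrative only; the package's actual implementation and the upstream model name `stg_iterable__event` are assumptions here):

```sql
select
    -- hash event_id together with _fivetran_user_id to form the new event grain
    {{ dbt_utils.generate_surrogate_key(['event_id', '_fivetran_user_id']) }} as unique_event_id,
    events.*
from {{ ref('stg_iterable__event') }} as events
```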
- Added the passthrough columns functionality for the `event_extension` and `user_history` source tables. You will see these additional columns persist through to the end `iterable__events` and `iterable__users` models. For instructions on leveraging this feature, refer to the README.
  - Notice: A `dbt run --full-refresh` is required each time these variables are edited.
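Following the convention of other Fivetran packages, these passthrough variables are likely declared as column lists in `dbt_project.yml`; a hedged sketch with hypothetical column names (confirm the exact variable names and format in the README):

```yml
# dbt_project.yml
vars:
  iterable_event_extension_pass_through_columns:
    - custom_event_field      # hypothetical column name
  iterable_user_history_pass_through_columns:
    - custom_user_attribute   # hypothetical column name
```

Remember the notice above: a `dbt run --full-refresh` is needed after editing these variables.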
- Updated the tests for uniqueness that were using `email` to use `unique_user_key`.
- The unique test in `iterable__events` now tests on `unique_event_id` instead of `event_id`.
- The unique test in `iterable__user_unsubscriptions` now tests on `unique_user_key`, `message_type_id`, `channel_id`, and `is_unsubscribed_channel_wide`.
PR #34 includes the following updates:
- Added an additional join on `template_id` in `iterable__campaigns` so the proper grain is reflected.
- Updated the `dbt_utils.unique_combination_of_columns` test on `iterable__campaigns` to include `template_id`.
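The updated test might look like the following in the model's yml file (a sketch, not the package's exact definition):

```yml
models:
  - name: iterable__campaigns
    tests:
      - dbt_utils.unique_combination_of_columns:
          combination_of_columns:
            - campaign_id
            - template_id
```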
- Adjusted intermediate model logic in `int_iterable__campaign_event_metrics` to correctly count unique totals based off of distinct email values for `iterable__campaigns`.
- Added an additional join on `template_id` in `int_iterable__recurring_campaigns` to resolve a data fanout issue.
PR #33 includes the following update:
- Updated intermediate model `int_iterable__list_user_unnest` to make sure empty array-rows are not removed for all warehouses.
  - BigQuery and Snowflake users: this affects downstream models `iterable__users` and `iterable__list_user_history`. We recommend running `dbt run --full-refresh` the next time you run your project.
PR #30 includes the following updates:
- Updated the incremental strategy for end model `iterable__events`:
  - For BigQuery, Spark, and Databricks, the strategy has been updated to `insert_overwrite`.
  - For Snowflake, Redshift, and PostgreSQL, the strategy has been updated to `delete+insert`.
  - We recommend running `dbt run --full-refresh` the next time you run your project.
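A warehouse-conditional strategy like this is typically selected in the model's config block; a hypothetical sketch of how `iterable__events` might express it (the package's actual config and unique key may differ):

```sql
{{
    config(
        materialized='incremental',
        unique_key='event_id',  -- hypothetical; use the model's actual unique key
        incremental_strategy='insert_overwrite' if target.type in ('bigquery', 'spark', 'databricks')
            else 'delete+insert'
    )
}}
```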
- Databricks compatibility for Runtime 12.2 or later.
  - Note: some models may run with an earlier runtime; however, 12.2 or later is required to run all models. This is because of syntax changes from earlier versions for use with arrays and JSON.
  - We also recommend using the `dbt-databricks` adapter over `dbt-spark` because each adapter handles incremental models differently. If you must use the `dbt-spark` adapter and run into issues, please refer to the Spark configurations section in dbt's documentation.
PR #27 includes the following updates:
- Incorporated the new `fivetran_utils.drop_schemas_automation` macro into the end of each Buildkite integration test job.
- Updated the pull request templates.
PR #28 adds the following changes:
- Adjusts the default materialization of `int_iterable__list_user_history` from a view to a table. This was changed to optimize the runtime of the downstream `int_iterable__list_user_unnest` model.
- Updates `int_iterable__list_user_unnest` to be materialized as an incremental table. In order to add this logic, we also added a new `unique_key` field -- a surrogate key hashed on `email`, `list_id`, and `updated_at` -- and a `date_day` field to partition by on BigQuery + Databricks.
  - You will need to run a full refresh first to pick up the new columns.
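The new fields described above could be sketched as follows (illustrative; the actual model, its macro choice, and its partition config may differ):

```sql
{{
    config(
        materialized='incremental',
        unique_key='unique_key',
        -- partition on date_day for BigQuery; hypothetical condition shown for brevity
        partition_by={'field': 'date_day', 'data_type': 'date'} if target.type == 'bigquery' else none
    )
}}

select
    *,
    {{ dbt_utils.generate_surrogate_key(['email', 'list_id', 'updated_at']) }} as unique_key,
    cast(updated_at as date) as date_day
from {{ ref('int_iterable__list_user_history') }}
```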
- Adds a `coalesce` to `previous_email_ids` in the `int_iterable__list_user_history` model, in case there are no previous email ids.
- Adjusts the `flatten` logic in `int_iterable__list_user_unnest` for Snowflake users.
- Added `iterable_[source_table_name]_identifier` variables to the source package to allow easier flexibility of the package to refer to source tables with different names.
  - Note! For the table `campaign_suppression_list_history`, the identifier variable has been updated from `iterable__campaign_suppression_list_history_table` to `iterable_campaign_suppression_list_history_identifier` to align with the current naming convention. If you are using the former variable in your `dbt_project.yml`, you will need to update it for the package to run. (#25)
- Updated README with identifier instructions and format update. (#25)
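If your source tables use non-default names, an identifier override in `dbt_project.yml` might look like this (the table name shown is hypothetical):

```yml
# dbt_project.yml
vars:
  iterable_campaign_suppression_list_history_identifier: "my_campaign_suppression_table"
```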
PR #18 includes the following breaking changes:
- Dispatch update for dbt-utils to dbt-core cross-db macros migration. Specifically, `{{ dbt_utils.<macro> }}` has been updated to `{{ dbt.<macro> }}` for the below macros:
  - `any_value`, `bool_or`, `cast_bool_to_text`, `concat`, `date_trunc`, `dateadd`, `datediff`, `escape_single_quotes`, `except`, `hash`, `intersect`, `last_day`, `length`, `listagg`, `position`, `replace`, `right`, `safe_cast`, `split_part`, `string_literal`, `type_bigint`, `type_float`, `type_int`, `type_numeric`, `type_string`, `type_timestamp`, `array_append`, `array_concat`, `array_construct`
- For the `current_timestamp` and `current_timestamp_in_utc` macros, the dispatch AND the macro names have been updated to `dbt.current_timestamp_backcompat` and `dbt.current_timestamp_in_utc_backcompat`, respectively.
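The migration pattern is the same for every macro in the list; illustrated here with `concat` on hypothetical columns and model names:

```sql
-- before (dbt-utils dispatch)
select {{ dbt_utils.concat(['first_name', 'last_name']) }} as full_name
from {{ ref('some_model') }}

-- after (dbt-core cross-db macro)
select {{ dbt.concat(['first_name', 'last_name']) }} as full_name
from {{ ref('some_model') }}
```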
- Dependencies on `fivetran/fivetran_utils` have been upgraded, previously `[">=0.3.0", "<0.4.0"]`, now `[">=0.4.0", "<0.5.0"]`.
- Incremental strategy within `iterable__events` has been modified to use `delete+insert` for Redshift and Postgres warehouses.
- Introduced variable `iterable__using_campaign_suppression_list_history` to disable related downstream portions if the underlying source table does not exist. For how to configure it, refer to the README.
  - Specifically, we have added conditional blocks so that relevant portions of `int_iterable__campaign_lists` are skipped if the underlying `stg_iterable__campaign_suppression_list_history` is not materialized when `iterable__using_campaign_suppression_list_history` is disabled. (#22)
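A sketch of what such a conditional block could look like inside `int_iterable__campaign_lists` (hypothetical excerpt; not the model's actual join logic):

```sql
{% if var('iterable__using_campaign_suppression_list_history', true) %}
left join {{ ref('stg_iterable__campaign_suppression_list_history') }} as suppression
    on campaigns.campaign_id = suppression.campaign_id
{% endif %}
```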
Thank you @awpharr for raising these to our attention! (#19)
🎉 dbt v1.0.0 Compatibility 🎉
- Adjusts the `require-dbt-version` to now be within the range [">=1.0.0", "<2.0.0"]. Additionally, the package has been updated for dbt v1.0.0 compatibility. If you are using a dbt version <1.0.0, you will need to upgrade in order to leverage the latest version of the package.
  - For help upgrading your package, we recommend reviewing this GitHub repo's release notes on what changes have been implemented since your last upgrade.
  - For help upgrading your dbt project to dbt v1.0.0, we recommend reviewing dbt Labs' upgrading-to-1.0.0 docs for more details on what changes must be made.
- Upgrades the package dependency to refer to the latest `dbt_iterable_source`. Additionally, the latest `dbt_iterable_source` package has a dependency on the latest `dbt_fivetran_utils`. Further, the latest `dbt_fivetran_utils` package also has a dependency on `dbt_utils` [">=0.8.0", "<0.9.0"].
  - Please note: if you are installing a version of `dbt_utils` in your `packages.yml` that is not in the range above, you will encounter a package dependency error.
Refer to the relevant release notes on the GitHub repository for specific details on the previous releases. Thank you!