[Bug] Existing dynamic tables attempted to rebuild instead of issuing a fast alter and moving on #1017
Comments
Note that the issue came back. I'm not sure if folks have access to dbt Cloud run logs, but I will share two runs. This run is from yesterday, and you can see the …
I tried locally and once again dbt is attempting to recreate the entire table. Not sure if y'all have access to dbt Cloud customer logs, so I'm happy to hop on a call if that would be useful.

It appears that Snowflake was toggling the `is_dynamic` value …
@nydnarb can you test the proposed change in #1016 by adding this macro (with the change on L45) to your project's `macros` directory?

```sql
{% macro snowflake__get_catalog_tables_sql(information_schema) -%}
select
table_catalog as "table_database",
table_schema as "table_schema",
table_name as "table_name",
case
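-- the proposed change: accept both 'YES' and 'Y', since Snowflake appeared
-- to toggle the is_dynamic value between the two formats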
when (is_dynamic = 'YES' OR is_dynamic = 'Y') and table_type = 'BASE TABLE' THEN 'DYNAMIC TABLE'
else table_type
end as "table_type",
comment as "table_comment",
-- note: this is the _role_ that owns the table
table_owner as "table_owner",
'Clustering Key' as "stats:clustering_key:label",
clustering_key as "stats:clustering_key:value",
'The key used to cluster this table' as "stats:clustering_key:description",
(clustering_key is not null) as "stats:clustering_key:include",
'Row Count' as "stats:row_count:label",
row_count as "stats:row_count:value",
'An approximate count of rows in this table' as "stats:row_count:description",
(row_count is not null) as "stats:row_count:include",
'Approximate Size' as "stats:bytes:label",
bytes as "stats:bytes:value",
'Approximate size of the table as reported by Snowflake' as "stats:bytes:description",
(bytes is not null) as "stats:bytes:include",
'Last Modified' as "stats:last_modified:label",
to_varchar(convert_timezone('UTC', last_altered), 'yyyy-mm-dd HH24:MI'||'UTC') as "stats:last_modified:value",
'The timestamp for last update/change' as "stats:last_modified:description",
(last_altered is not null and table_type='BASE TABLE') as "stats:last_modified:include"
from {{ information_schema }}.tables
{%- endmacro %}
```
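Because macros defined in the root project take precedence over an adapter's macros of the same name, placing this file under the project's macro paths should shadow the built-in `snowflake__get_catalog_tables_sql`.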
I just did the following. First, I made sure that, in my … Then, I added the above macro to the project's `macros` directory. Then, I updated the dynamic table model so that it only looked at the last three days' worth of data so it doesn't take 45 minutes to materialize (e.g., …). Then, …
Checking …

This behavior is as expected. Success! But… then, I try running it again:

Checking …
Looks like the error came from the … For completeness: …
We're seeing the same error with the …
The issue has been identified. dbt uses a …
@nydnarb Thank you so much for the thorough issue & follow-up comments!
I agree, and I am thinking that we might be able to avoid this error by fully qualifying the name within the rename.
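A minimal sketch of the difference, with illustrative identifiers (not the verbatim materialization code):

```sql
-- unqualified: the backup name resolves against the session's current schema
alter dynamic table MY_DB.MY_SCHEMA.MONTHLY rename to MONTHLY__dbt_backup;

-- fully qualified: the rename no longer depends on session defaults
alter dynamic table MY_DB.MY_SCHEMA.MONTHLY rename to MY_DB.MY_SCHEMA.MONTHLY__dbt_backup;
```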
This is the issue that we are tracking down with Snowflake. We would like to be able to reliably detect whether an object is already a dynamic table.
I've opened a separate bug for the unqualified renaming: …

I'd also like to share what I believe to be a viable workaround while we sort this out with Snowflake: …

I'm going to close this issue as a duplicate in the meantime.
Is this a new bug in dbt-snowflake?
Current Behavior
Note that this issue resolved itself, so it might not be a bug anymore.
Consider this dynamic table:
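(The original model SQL is not shown here; the following is a minimal illustrative stand-in, with assumed config values and an assumed upstream `ref`.)

```sql
{{ config(
    materialized='dynamic_table',
    target_lag='24 hours',
    snowflake_warehouse='TRANSFORMING'
) }}

select *
from {{ ref('aws_cost_reports') }}
```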
This model has been materialized, and when dbt Cloud does our daily build, it does not need to rebuild this table. If I inspect the run artifacts in the `target/run` folder, I see this (which aligns with my expectation):
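Roughly the following, reconstructed for illustration (identifiers and the exact statement are assumptions, not the verbatim artifact):

```sql
-- a fast alter/refresh of the existing object, rather than a create or replace
alter dynamic table MY_DB.MY_SCHEMA.AWS_COST_REPORTS_MONTHLY refresh
```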
However, on May 2nd, we had an error with this model. Upon investigating the run artifacts, I saw the following SQL scripts (reformatted but otherwise unchanged):
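The verbatim scripts are not reproduced here; their gist, with illustrative identifiers, was a full rebuild that begins by renaming the live table to an unqualified backup name:

```sql
-- instead of a fast alter, dbt staged a full create-or-replace, starting with
-- a rename whose target carries no database/schema qualification
alter dynamic table MY_DB.MY_SCHEMA.MONTHLY rename to MONTHLY__dbt_backup
```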
So, the error occurred because of the unqualified name `MONTHLY__dbt_backup`. However, the bigger question is why dbt was attempting to rebuild a dynamic table that was already materialized.

I'm not sure exactly how dbt determines whether a dynamic table is already materialized, but it could be related to the error. For example, if dbt failed to "see" the materialized dynamic table because it was checking an unqualified location that didn't match the appropriate location, then perhaps dbt thought it had to materialize this dynamic table for the first time. Note that @jeremyyeo shared this bug in Slack while we were discussing the issue: #1016
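A sketch of the detection problem, assuming the catalog query shown in the macro above (identifiers illustrative): if `is_dynamic` comes back as `'Y'` while the macro only matches `'YES'`, the object is reported as a plain `BASE TABLE`, and dbt plans a full rebuild instead of a fast `alter`.

```sql
select
    table_name,
    table_type,
    is_dynamic
from MY_DB.information_schema.tables
where table_schema = 'MY_SCHEMA'
  and table_name = 'AWS_COST_REPORTS_MONTHLY'
```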
I confirmed this behavior on dbt-snowflake 1.7.3 and dbt-core 1.7.13. Also, I confirmed that Snowflake bundle 2024_03 was disabled. This issue occurred on dbt Cloud and locally in a `python` Docker container.

However, as of May 3rd, jobs in dbt Cloud are running successfully. When I check the artifacts for `aws_cost_reports_monthly`, I see the appropriate `alter` statement.

Here is a link to the Slack discussion: #1016
Expected Behavior
The issue is resolved and expected behavior has returned. That is, when a dynamic table is already materialized and dbt Cloud builds it, it issues an `alter` command.

However, I am concerned about this part of the materialization: …
It relies on default behavior in Snowflake to infer the schema and database. Since that behavior is subject to change, dbt might want to be more explicit here.
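A hypothetical sketch of what being more explicit could look like, using standard dbt Jinja context members (`this` and `api.Relation.create`); this is not the actual materialization source:

```sql
{#- build the backup relation from the model's own database and schema,
    rather than letting Snowflake resolve an unqualified name against the
    session defaults -#}
{%- set backup_relation = api.Relation.create(
        database=this.database,
        schema=this.schema,
        identifier=this.identifier ~ '__dbt_backup'
) -%}
alter dynamic table {{ this }} rename to {{ backup_relation }}
```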
Steps To Reproduce
I can't reproduce the issue because it appears to be resolved.
Relevant log output
Environment