Fix snapshot_check_all_get_existing_columns: use adapter.get_columns_in_relation #5232
#5223 caused failing tests overnight in dbt-spark + dbt-databricks.

The code was written to use the `get_columns_in_relation` macro, rather than the `get_columns_in_relation` adapter method (Python). We should aim for consistency between these, but elsewhere in dbt-core we call it as `adapter.get_columns_in_relation`, so we should do the same here.

Here's the breakdown:
- In dbt-bigquery, the macro redirects to the adapter method, since all the actual logic is in Python (no SQL involved): source
- In dbt-spark, the logic is written in both Python and SQL. The adapter method calls the macro, but it also does some of its own post-processing, which is missed if you just call the macro directly. I'll open a follow-up PR to rework this for consistency, but the change in this PR should be sufficient to get tests passing again.

Checklist
- `changie new` to create a changelog entry
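For reference, the shape of the fix in `snapshot_check_all_get_existing_columns` is to call the adapter method instead of the dispatched macro. This is an illustrative sketch, not the exact lines changed in this PR; the variable name and arguments here are assumptions:

```diff
- {%- set existing_cols = get_columns_in_relation(node) | map(attribute='name') | list -%}
+ {%- set existing_cols = adapter.get_columns_in_relation(node) | map(attribute='name') | list -%}
```

Calling `adapter.get_columns_in_relation` lets each adapter's Python implementation run in full (including any post-processing, as in dbt-spark), rather than invoking only the SQL macro half of the logic.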