
[CT-1123] Adapter pre_model_hook + post_model_hook for tests and compilation, too #212

Closed
jtcohen6 opened this issue Sep 6, 2022 · 22 comments
Assignees
Labels
enhancement New feature or request help_wanted Extra attention is needed paper_cut A small change that impacts lots of users in their day-to-day

Comments

@jtcohen6
Contributor

jtcohen6 commented Sep 6, 2022

Adapting from dbt-labs/dbt-snowflake#23 (comment):

Let's talk about adapter.pre_model_hook + adapter.post_model_hook.

Background

Here's where they're triggered to run, right before and after a model materialization:

https://github.com/dbt-labs/dbt-core/blob/8c8be687019014ced9be37c084f944205fc916ab/core/dbt/task/run.py#L279-L283

These are different from user-provided pre-hook and post-hook, which run within materializations. (I wish we named these things a bit more distinctly). They are also no-ops by default:

https://github.com/dbt-labs/dbt-core/blob/8c8be687019014ced9be37c084f944205fc916ab/core/dbt/adapters/base/impl.py#L1093-L1116

For certain adapter plugins, these "internal" hooks are the appropriate mechanism for database-specific behavior that needs to wrap a node's execution. For instance, on dbt-snowflake, this is where we turn the snowflake_warehouse config into a use warehouse command. @dataders and I were just discussing the same principle for use database (compute and storage) in serverless Azure Synapse (?).
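As an illustration of the mechanism described above, here is a minimal, self-contained sketch of the `adapter.pre_model_hook` / `adapter.post_model_hook` pattern. Only the two hook names come from dbt's `BaseAdapter` API; everything else (`FakeAdapter`, the plain-dict config, `run_model`) is hypothetical scaffolding, not actual dbt-core or dbt-snowflake code:

```python
# Sketch of the adapter.pre_model_hook / adapter.post_model_hook pattern.
# Hook names mirror dbt's BaseAdapter; the rest is hypothetical scaffolding.

class FakeAdapter:
    def __init__(self, default_warehouse):
        self.default_warehouse = default_warehouse
        self.executed_sql = []  # stands in for queries sent to the database

    def execute(self, sql):
        self.executed_sql.append(sql)

    # No-op by default in dbt-core; adapter plugins may override.
    def pre_model_hook(self, config):
        warehouse = config.get("snowflake_warehouse")
        if warehouse:
            self.execute(f"use warehouse {warehouse}")
        return warehouse  # returned context is handed to post_model_hook

    def post_model_hook(self, config, context):
        if context:
            # restore the default warehouse after the model finishes
            self.execute(f"use warehouse {self.default_warehouse}")


def run_model(adapter, config, model_sql):
    # Mirrors the ordering in core/dbt/task/run.py: hook, materialize, hook.
    ctx = adapter.pre_model_hook(config)
    try:
        adapter.execute(model_sql)
    finally:
        adapter.post_model_hook(config, ctx)


adapter = FakeAdapter(default_warehouse="transforming")
run_model(adapter, {"snowflake_warehouse": "xlarge_wh"}, "create table t as select 1")
print(adapter.executed_sql)
# → ['use warehouse xlarge_wh', 'create table t as select 1', 'use warehouse transforming']
```

Because `TestRunner.execute` never calls `run_model`-style wrapping, none of this fires for tests, which is exactly the gap this issue describes.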

Current limitations

  • These hooks are called for models, seeds, and snapshots, but not for tests, simply because the TestRunner does not inherit from the ModelRunner or extend its execute method (where adapter.pre_model_hook + adapter.post_model_hook get called). We've gotten the request to support snowflake_warehouse on tests several times.
  • While these hooks appropriately wrap all queries called during materialization, they do not seem to wrap compilation. Introspective queries called during compilation (e.g. run_query) will use the default warehouse (target.warehouse or user/role-configured), rather than the value of snowflake_warehouse configured on the model.
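To make the second limitation concrete, here is a toy sketch of the ordering (all names are hypothetical scaffolding, not dbt internals): compilation, including any introspective queries it issues, happens before `pre_model_hook` fires, so those queries still hit the default warehouse:

```python
# Toy model of the ordering described above: introspective queries issued
# during compilation run before adapter.pre_model_hook, so they use the
# default warehouse. All names here are hypothetical scaffolding.

log = []

def run_query(sql, warehouse):
    log.append((warehouse, sql))

current_warehouse = "default_wh"

def compile_model():
    # e.g. a Jinja macro calling run_query() introspectively
    run_query("select max(updated_at) from source_table", current_warehouse)
    return "create table t as select ..."

def pre_model_hook():
    global current_warehouse
    current_warehouse = "model_wh"  # the model's snowflake_warehouse config

def post_model_hook():
    global current_warehouse
    current_warehouse = "default_wh"

compiled_sql = compile_model()              # introspective query -> default_wh
pre_model_hook()
run_query(compiled_sql, current_warehouse)  # materialization -> model_wh
post_model_hook()

print(log)
# → [('default_wh', 'select max(updated_at) from source_table'),
#    ('model_wh', 'create table t as select ...')]
```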

Why improve this

  • Consistency: this is what users expect to happen, and it yields a unified query history, so that all queries associated with a model actually use that model's configured compute.
  • It's desirable to configure specific tests to run with beefier or lesser compute, just as with models.
@jtcohen6 jtcohen6 added the enhancement New feature or request label Sep 6, 2022
@github-actions github-actions bot changed the title Adapter pre_model_model + post_model_hook for tests and compilation, too [CT-1123] Adapter pre_model_model + post_model_hook for tests and compilation, too Sep 6, 2022
@jtcohen6 jtcohen6 changed the title [CT-1123] Adapter pre_model_model + post_model_hook for tests and compilation, too [CT-1123] Adapter pre_model_hook + post_model_hook for tests and compilation, too Nov 14, 2022
@Kratzbaum92

Any updates on this topic?

@pei0804

pei0804 commented Feb 20, 2023

Do you plan to resolve this issue?

@mimoyer21

mimoyer21 commented May 3, 2023

@jtcohen6 Any update on timeline for this one? This would be a big help.

@iknox-fa

per BLG-- Maybe use protocols?

@rd144

rd144 commented May 26, 2023

@jtcohen6 Is there any update on this? We're looking to reduce the snowflake costs of our pipelines and this functionality would support us in that.

@dbeatty10
Contributor

Thank you to all of you that have asked about this recently -- it's helpful for us to see the interest level.

While we don't currently have a timeline for implementing this feature, we are still interested in:

  • improving consistency & aligning with user expectations
  • being able to configure the compute used for specific tests

@dbeatty10 dbeatty10 removed the triage label Jun 4, 2023
@jtcohen6 jtcohen6 assigned aranke and unassigned aranke Jul 19, 2023
@SoumayaMauthoorMOJ

SoumayaMauthoorMOJ commented Oct 2, 2023

I’ve created a dbt project to demonstrate how to implement WAP on table materializations using “dummy” post-test-hooks:

  1. Create a ‘wap’ version of your model:

```sql
--my_table_wap.sql
{{ config(
    table_type='iceberg',
    s3_data_naming='unique',
    s3_data_dir=transform_table.generate_s3_location(),
) }}

SELECT 1 AS id
```

  2. Create a view model that references the wap model, then drops the view and renames the wap table as a post-hook:

```sql
--my_table.sql
{{ config(
    materialized='view',
    post_hook=[
        "DROP VIEW {{ this }}",
        "{{ rename_relation(ref('my_table_wap'), this) }}",
    ],
) }}

SELECT * FROM {{ ref('my_table_wap') }}
```

It applies the approach to the example jaffle_shop_db dbt project which uses the dbt-duckdb adapter.

Would this feature replace the need for these “dummy” post-test-hooks? Otherwise, I'm happy to raise a ticket for implementing post-test-hooks, since my approach is just a workaround.

@dbeatty10
Contributor

@SoumayaMauthoorMOJ cool that you were able to create an example of applying Write-Audit-Publish (WAP) with dbt build!

This issue (#212) seems focused on enabling use-cases like use warehouse in dbt-snowflake for dbt tests rather than WAP.

Rather, what you are asking about seems like it is covered by this Discussion: dbt-labs/dbt-core#5687. If applicable, do you want to add your thoughts or questions to that Discussion?

@SoumayaMauthoorMOJ

@dbeatty10 thanks for clarifying. I did add a comment already (see here) but I didn't get a response so I thought an issue might be a good way to go to push the idea forward?

@dbeatty10
Contributor

@SoumayaMauthoorMOJ Ah, I see that now! dbt-labs/dbt-core#5687 is still the best place at this stage, and it would also be the perfect place to share this!

If you are looking to instigate further discussion, you could try posing some questions that invite the community to think and interact with your idea.

e.g., you could ask folks for feedback on the pros/cons of your approach. You could also ask if anyone has ideas how to enable that pattern in dbt-core without the need for the “dummy” post-test-hooks.

@vskarine

Hi,
have there been any updates on this issue? Alternative suggestions are also welcome. We are trying to get tests to run on the same warehouse as models to optimize time and cost.
Thank you!

@dbeatty10
Contributor

@vskarine We're still interested in this feature, but we don't currently have a timeline for implementing it.

@vskarine

> @vskarine We're still interested in this feature, but we don't currently have a timeline for implementing it.

Thanks for the update. I guess for now the only way for us to do it is to change warehouse size in the profile before each pipeline runs.

@bjarneschroeder

Hey @dbeatty10,
although it will probably take some time, I would like to try to work on this. 😊

@dbeatty10
Contributor

Awesome @bjarneschroeder ! 🏆

Give it a go and let us know if you need any help along the way.

@dbeatty10 dbeatty10 added the help_wanted Extra attention is needed label Feb 2, 2024
@bjarneschroeder

Hey @dbeatty10, quick update:

I'm on it, but it's taking me more time than I expected to understand the overall structure of dbt and how the different parts of the project interact under the hood. After diving deeper into the project, playing around with a custom project, and implementing some first changes, I wanted to look at the test cases that currently cover hook execution for the run command, so I can add similar cases for the test command. Although I can generally run tests successfully with make, I'm currently struggling to find good cases to debug so that I can inspect the internal program state at specific points in the execution.

I found what looked like appropriate test cases in dbt/tests/adapter, but I struggled to execute them. I then found out that they were removed in this commit, that some restructuring of tests is underway (dbt-labs/dbt-core#9513), and, from the dbt-postgres repo, that the moved tests don't seem to be integrated yet.

TL;DR:
I'm still working toward a better understanding of the issue and of what a good implementation needs.
Because hooks always require interaction with adapters, it's tricky for me to find a good way of debugging things and understanding them better. I'm on it; it's a grind (a fun one though). ☺️

@jan-wolos-payu-gpo

Hey @bjarneschroeder, any luck with some progress on this? :)

@bjarneschroeder

Hey @jwolos I started a new job a few weeks ago which keeps me very busy. I unfortunately do not really have the time to work on this at the moment. Sorry!

@will-sargent-dbtlabs

Upvoting this as a request from several active customers.

@AlexanderStephenson

We are using Snowflake, and for our environment the default warehouse is a small. This is fast enough to run 95% of our models. Where we have a model whose row count pushes the limit of that warehouse, we use the snowflake_warehouse config with a get_warehouse macro to set a larger warehouse. Using the macro, we can set different warehouses for each of the environments the model is built in: Dev, UAT, Production.

What we are experiencing now is that the model is successfully built with an XLarge warehouse, but the tests are failing because they default back to the environment's default warehouse, which is a small. The test is a unique test on a table with 14b rows, so it is not complex SQL, just a lot of data.

@dbeatty10
Contributor

@AlexanderStephenson beginning in dbt-core v1.9 (currently in beta), you can do something like this to configure snowflake_warehouse for your models and data tests:

```yaml
models:
  - name: my_model
    config:
      snowflake_warehouse: something
    columns:
      - name: id
        tests:
          - accepted_values:
              values: [2]
              config:
                severity: warn
                snowflake_warehouse: something_else
```

Wanna give that a shot and see if it works for you?

@dbeatty10
Contributor

Closing as resolved by dbt-labs/dbt-core#10767.

Please open an issue here if you try this in dbt Core v1.9+ and it doesn't work for you.

mikealfare pushed a commit that referenced this issue Dec 2, 2024
Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>