[CT-701] Store test pass and failures using --store-failures flag #5313
Comments
@brittianwarner Thanks for reopening here! Agree that this is the right place to continue the conversation. For the specific use case you've outlined:

I think leveraging dbt's metadata: we've talked before about the idea of a centralized table containing all that test metadata (linked in #4624). My preference remains to see this information made available via metadata, possibly synced from a dbt metadata service into the analytical database. In the meantime, I know that some folks have taken matters into their own hands. It is the current behavior of dbt to create tables for all tests with --store-failures enabled.

In the general case, an empty table (zero rows) means pass, and a table with nonzero rows means failure. But this doesn't account for tests with alternative fail_calc configurations. My inclination is to:

Am I missing any important pieces of the puzzle?
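In the meantime, one way folks have "taken matters into their own hands" without touching warehouse tables is to read the statuses out of dbt's run_results.json artifact after a run. A minimal sketch, assuming the documented shape of that artifact (the function name is ours, and the `unique_id` prefix and status values should be verified against your dbt version):

```python
import json

def summarize_test_results(run_results_path):
    """Collect the status of every test node from a dbt run_results.json file."""
    with open(run_results_path) as f:
        payload = json.load(f)
    rows = []
    for result in payload.get("results", []):
        unique_id = result.get("unique_id", "")
        if not unique_id.startswith("test."):
            continue  # keep only test nodes, skip models/seeds/snapshots
        rows.append({
            "unique_id": unique_id,
            "status": result.get("status"),    # e.g. "pass", "fail", "warn", "error"
            "failures": result.get("failures"),  # fail_calc result, may be None
        })
    return rows
```

Rows like these could then be inserted into a centralized results table, giving passes and failures side by side without changing test materializations.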
Hey @jtcohen6, the approach we've taken for now is the following:
```sql
{% macro ... %}

    create or replace temporary table {{ a_target_table_for_the_results_of_the_current_test_in_the_current_execution }} as
    {{ main_sql }};

    insert into {{ central_table_for_persisting_tests }}
    select {{ columns_transformations_to_conform_test_results }}
    from {{ a_target_table_for_the_results_of_the_current_test_in_the_current_execution }};

    select
        {{ fail_calc }} as failures,
        {{ fail_calc }} {{ warn_if }} as should_warn,
        {{ fail_calc }} {{ error_if }} as should_error
    from (
        select *
        from {{ a_target_table_for_the_results_of_the_current_test_in_the_current_execution }}
        where __dbt_test_passed = false
        {{ "limit " ~ limit if limit != none }}
    ) dbt_internal_test

{% endmacro %}
```
While working on this, we felt that a potential override in https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/materializations/tests/test.sql would have been slightly better, since we want to stay in sync with what dbt is doing in core/dbt/include/global_project/macros/materializations/tests/test.sql (lines 5 to 33 at 1cfc085).

get_test_sql is too aggressive in expecting only failures to be reported out of main_sql, so we need to work around that.
Is there any possibility to expose some configuration that can be passed down to
If that were the case, then the override could be limited to ensuring that tests report all of that data back, with the persistence of failing/passing/both kinds of records handled in the test materialization.

Some additional notes: the way dbt is currently set up, no pre-handling/post-handling of the test results happens in the ... Hence any type of SQL statement that is prepended, for example, in the ...
This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please comment on the issue, or else it will be closed in 7 days.
Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest. Just add a comment to notify the maintainers.
Is this your first time opening an issue?
Describe the Feature
As a dbt user, I would like to store all tests, including passes and failures. As part of a DQ reporting initiative, we want to be able to see the percentage of successful tests compared to failures. We know we could possibly customize the output to do this, but I am wondering if this could be built into the --store-failures flag, possibly with a field in the created table for status (pass vs. failure).

Describe alternatives you've considered
Create a service that reads dbt output to store test results.
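As a sketch of what such a service might compute for the DQ metric described above (percentage of successful tests), here is a minimal pass-rate calculation over per-test statuses as they appear in dbt's run artifacts; the function name is illustrative, and the status strings are assumptions to verify against your dbt version:

```python
def pass_rate(statuses):
    """Return the share of tests that passed, as a float in [0, 1].

    Only recognized test statuses are counted; anything else
    (e.g. model statuses mixed into the same list) is ignored.
    """
    tests = [s for s in statuses if s in ("pass", "fail", "warn", "error")]
    if not tests:
        return 0.0
    return sum(1 for s in tests if s == "pass") / len(tests)
```

For example, `pass_rate(["pass", "fail", "pass", "warn"])` yields 0.5, which is the kind of headline figure a DQ dashboard would report.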
Who will this benefit?
Anyone reporting on data quality in a regulated environment.
Are you interested in contributing this feature?
Yes the best I can.
Anything else?
This is more for individuals who have a requirement to report on DQ metrics. It seems like most of the functionality is in place; however, it is not clear whether we can store successful tests along with failures. Please advise if I am missing something.