Add support for Spark Connect (SQL models) #899
base: main
Conversation
Thank you for your pull request and welcome to our community. We could not parse the GitHub identity of the following contributors: Vakaris.
setup.py (Outdated)
```diff
@@ -59,7 +59,16 @@ def _get_dbt_core_version():
     "thrift>=0.11.0,<0.17.0",
 ]
 session_extras = ["pyspark>=3.0.0,<4.0.0"]
 all_extras = odbc_extras + pyhive_extras + session_extras
+connect_extras = [
+    "pyspark==3.5.0",
```
Can we support `pyspark>=3.4.0,<4`, or at least `pyspark>=3.5.0,<4`?
`pyspark>=3.5.0,<4` added.

The 3.4.0 Connect module has an issue where temporary views are not shared between queries: if one dbt query creates a temp view, another query cannot see it. I can't find the Spark issue # now.
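To illustrate the temp-view issue described above, here is a hedged SQL sketch (the table and view names are hypothetical, not from this PR). Each statement is submitted to the server as a separate Connect query:

```sql
-- First query: create a temporary view, as dbt does for ephemeral/staging logic.
CREATE TEMPORARY VIEW staging_orders AS
SELECT * FROM orders WHERE status = 'open';

-- Second query: on the 3.4.0 Connect module this reportedly failed to resolve
-- staging_orders, because the temp view was not shared across Connect queries;
-- on 3.5.0+ the view is visible within the same session.
SELECT count(*) FROM staging_orders;
```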
Seeing as there is some recent activity on issue #814, and knowing that at least a couple of people are actively using this fork, I've updated it. Looking forward to any insights regarding the implementation, as well as the likelihood of this PR getting merged.
partially resolves #814
docs dbt-labs/docs.getdbt.com/#
Problem
dbt-spark has limited options for open-source Spark integrations. Currently, the only way to run dbt against open-source Spark in production is through a Thrift connection. However, a Thrift connection isn't suitable for every use case; for instance, it doesn't support Thrift over HTTP. In addition, the PyHive project, which dbt's Thrift connection relies on, is unmaintained (at least according to its GitHub page).
Solution
This PR proposes introducing support for Spark Connect (for SQL models only).
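For context, a Spark Connect target would likely be configured in `profiles.yml` along these lines. This is a sketch only: the method name and fields are assumptions based on the existing `session`/`thrift` method patterns, not the final interface merged in this PR.

```yaml
# Hypothetical profiles.yml entry for the proposed Connect method.
my_project:
  target: dev
  outputs:
    dev:
      type: spark
      method: connect        # assumed name for the new connection method
      host: localhost
      port: 15002            # default Spark Connect gRPC port
      schema: analytics
```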
Checklist
How to test locally?
```shell
./start-connect-server.sh --packages org.apache.spark:spark-connect_2.12:3.5.0 --conf spark.sql.catalogImplementation=hive
```
Known issues: #901