[Core feature] Add PodTemplate support for Spark driver and executor pods #4105
Comments
This looks like a really valuable feature. It looks like the backend work in flytepropeller has been completed in #4183. Am I right in thinking there is still a change required to flytekit?
@Tom-Newton actually the final design drifted from this original proposal and doesn't require a flytekit change. Instead I made use of the existing singular pod template.
Thanks for explaining. Probably I should have read the PR description on #4183. I think it would definitely be very useful to have separate pod templates for driver and executor. We always use spot nodes, usually of a different node type, for executors.
I think there is also a small issue with environment variables that reference secrets. I created a GitHub issue and I have a fix I can contribute.
#take
Motivation: Why do you think this is important?
Today, Spark driver and executor pods are not configurable on a per-task basis; they use the shared propeller k8s plugin config. This prevents, for instance, setting additional tolerations when running a Spark job.
This builds on the PodTemplate investments already made in Flyte.
Goal: What should the final outcome look like, ideally?
flytekit should accept optional driver and executor pod templates in the Spark task config.
The Spark plugin should use these optional templates and existing ToK8sPodSpec logic to populate the SparkApplicationSpec.
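A minimal sketch of what this could look like from the flytekit side under this proposal. The driver_pod_template and executor_pod_template parameter names are assumptions made here for illustration; the design that eventually merged in #4183 diverged from this proposal, so this is not the released API.

```python
# Hypothetical flytekit usage under this proposal. The executor_pod_template
# parameter is illustrative only, not part of the released plugin API.
from flytekit import PodTemplate, task
from flytekitplugins.spark import Spark
from kubernetes.client import V1PodSpec, V1Toleration

executor_template = PodTemplate(
    pod_spec=V1PodSpec(
        containers=[],
        tolerations=[
            V1Toleration(
                key="dedicated", operator="Equal", value="spark", effect="NoSchedule"
            ),
        ],
    ),
)

@task(
    task_config=Spark(
        spark_conf={"spark.executor.instances": "4"},
        # Assumed field from this proposal; not the merged design.
        executor_pod_template=executor_template,
    ),
)
def my_spark_job() -> None:
    ...
```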
Describe alternatives you've considered
The Spark on K8s operator supports a number of k8s pod settings via spark_conf, which is already available in flytekit. While it doesn't support tolerations, for example, we could extend this approach by adding keys like spark.kubernetes.driver.tolerations, and either get support added to the Spark on K8s operator or parse them out separately in flyteplugins and apply them to the SparkApplicationSpec when building it.
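To make the alternative concrete, here is a sketch of how such keys might be passed through the existing spark_conf. Note that spark.kubernetes.driver.tolerations is a hypothetical key used only to convey the idea; it is not supported by Spark or the Spark on K8s operator today.

```python
# Sketch of the spark_conf-based alternative. The toleration key below is
# hypothetical and would require new support in the operator or flyteplugins.
from flytekit import task
from flytekitplugins.spark import Spark

@task(
    task_config=Spark(
        spark_conf={
            "spark.executor.instances": "4",
            # Hypothetical key proposed in this alternative:
            "spark.kubernetes.driver.tolerations": "dedicated=spark:NoSchedule",
        },
    ),
)
def my_spark_job_via_conf() -> None:
    ...
```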
Propose: Link/Inline OR Additional context
One solution here is to add driver and executor pod template parameters to the Spark dataclass and underlying SparkJob proto. Then in flyteplugins we can populate the driver and executor specs.
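A rough sketch, under the assumptions of this proposal, of how the flytekit Spark dataclass could carry the new fields that would then be serialized into the SparkJob proto. Field names and the set of existing fields shown are illustrative and do not reflect the design that eventually landed.

```python
# Illustrative only: a possible shape for the extended Spark task config.
from dataclasses import dataclass, field
from typing import Dict, Optional

from flytekit import PodTemplate


@dataclass
class Spark:
    spark_conf: Dict[str, str] = field(default_factory=dict)
    hadoop_conf: Dict[str, str] = field(default_factory=dict)
    # Proposed additions: optional per-role pod templates that flyteplugins
    # would translate into the driver and executor specs of SparkApplicationSpec.
    driver_pod_template: Optional[PodTemplate] = None
    executor_pod_template: Optional[PodTemplate] = None
```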