(feat): add tls to api server #656
Conversation
A new image has been built to help with testing out this PR. To use this image, run the following:

cd $(mktemp -d)
git clone git@github.com:opendatahub-io/data-science-pipelines-operator.git
cd data-science-pipelines-operator/
git fetch origin pull/656/head
git checkout -b pullrequest 70dba66fba5a22f6c2848554c523d84e071dd0e3
oc new-project opendatahub
make deploy IMG="quay.io/opendatahub/data-science-pipelines-operator:pr-656"

More instructions here on how to deploy and test a Data Science Pipelines Application.
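Once the operator is running, a DSPA can be created to exercise the new TLS path. The following is a minimal sketch only: apart from podToPodTLS (the field this PR adds), the apiVersion, field names, and values are assumptions and placeholders, not taken from this PR; consult the repository's sample DSPA for the authoritative spec.

```sh
# Illustrative sketch; everything except podToPodTLS is an assumption/placeholder.
oc apply -n opendatahub -f - <<'EOF'
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1   # assumed version
kind: DataSciencePipelinesApplication
metadata:
  name: sample
spec:
  dspVersion: v2
  podToPodTLS: true        # field added by this PR; defaults to true when omitted
  apiServer: {}
  objectStorage:
    minio:
      image: "<minio-image>"   # placeholder
      deploy: true
EOF
```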
Deployed DSPO and created a DSPA using the YAML file below. Created a pipeline run, and it failed with the error below.
Other tests:
Tested as per the instructions; the first 7 tests worked as expected. The pipeline run fails with the attached error.
I don't see this anywhere yet -- just making sure I'm not missing it.
Edit: it might be ConfigureCABundle, but I'm not seeing how that bundle makes it into the driver and launcher.
Edit 2: OK, yep, I see the calls to ConfigureCABundle for the driver and executor in both container.go and dag.go.

Sorry, I didn't follow up -- I found that this was in fact not needed, so the comment is irrelevant.
/lgtm
I can tag this with approve once @VaniHaripriya verifies it.
I was wrong about being wrong here; I've added a new commit that resolves this issue. To sum it up, we need to add the service-ca CA bundles to our custom CA (which gets mounted to the API server, driver, and launcher). We get this from another ConfigMap; the relevant code additions are in params, here: ffdd9ad#diff-5fd0ead87d572dec71ec0194647f49e295c220aa3798b993d75e3b491f64a17a. The rest of the changes in the commit are additional test cases.
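For context, a common way to surface the OpenShift service CA is a ConfigMap annotated for injection; the service-ca operator then populates a service-ca.crt key that can be appended to a custom CA bundle. This is a general OpenShift sketch, not the exact ConfigMap or key names used by DSPO; the dsp-trusted-ca name below is hypothetical.

```sh
# Hypothetical ConfigMap name; the service-ca operator injects the cluster's
# service CA into any ConfigMap carrying this annotation.
oc create configmap dsp-trusted-ca -n opendatahub
oc annotate configmap dsp-trusted-ca -n opendatahub \
  service.beta.openshift.io/inject-cabundle=true

# After injection, the bundle appears under the service-ca.crt key and can be
# concatenated with a custom CA before mounting into apiserver/driver/launcher.
oc get configmap dsp-trusted-ca -n opendatahub \
  -o jsonpath='{.data.service-ca\.crt}' >> custom-ca-bundle.crt
```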
Tested both scenarios (podToPodTLS = false and podToPodTLS = true); everything works as expected.
* add openshift ingress cabundle to pa/apiserver/ui
* add ui tls enabled kfp server client
Signed-off-by: Humair Khan <[email protected]>
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: gregsheremeta.
The issue resolved by this Pull Request:
Resolves: https://issues.redhat.com/browse/RHOAIENG-4968
Depends on: opendatahub-io/data-science-pipelines#40
Description of your changes:
Add a new component-wide TLS enablement field to the DSPA: PodToPodTLS, which is enabled by default and ensures that the API server serves over TLS. This also updates the PA and KFP UI, as well as the OAuth proxy in the apiserver pod, to recognize the CA bundle.

One potentially controversial decision here is the choice to have the OAuth proxy sidecar communicate via the service DNS name instead of localhost. The reason is that the certs used to serve TLS for the API server are leveraged from the OpenShift service signer, and these only recognize subjects for the service name, not localhost. My assumption is that this shouldn't matter, since kube-proxy / network optimizations keep intra-pod traffic from leaving the pod's worker node.
Another thing to note: the PA deployment is now updated to recognize certs from two paths, and because we use RHEL containers, the RHEL SSL default cert path is recognized here.
Other changes are to the health check, to allow for polling based on the transfer protocol (HTTP or HTTPS).
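To see why localhost would fail verification, one can inspect the serving certificate issued by the OpenShift service CA: its Subject Alternative Names only cover the service DNS forms, not localhost. The secret name and namespace below are hypothetical examples, not the actual names used by DSPO.

```sh
# Hypothetical secret name; replace with the serving-cert secret for your DSPA's
# API server service. The SAN list should contain only
# <service>.<namespace>.svc and <service>.<namespace>.svc.cluster.local,
# which is why the oauth proxy must target the service DNS name, not localhost.
oc get secret ds-pipeline-tls-sample -n opendatahub \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | \
  openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
```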
Testing instructions
instructions.md
When podToPodTLS: false, these omit TLS mentions. Note that when podToPodTLS is not specified in the DSPA, HTTPS and TLS gRPC are enabled by default.
- Note the https. Attempt to use http; you should get an error response: "Client sent an HTTP request to an HTTPS server." This confirms that the API server's HTTP container is now behind a TLS cert (see the curl sketch after this list).
- Try again with TLS verification disabled; this should return {} if you have no runs in this DSP.
- Provide the kube-root-ca.crt to curl via --cacert and try again, then attempt to list runs via the route and confirm this works.
- Set podToPodTLS=false in the DSPA and confirm DSP works as before.
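A minimal sketch of those curl checks follows. The service name, route name, namespace, port, and the /apis/v2beta1/runs path are assumptions/placeholders for a DSPA named "sample", not values taken from this PR.

```sh
# Placeholders: adjust names, namespace, and port for your DSPA.
SVC=https://ds-pipeline-sample.opendatahub.svc.cluster.local:8888

# Plain HTTP should be rejected: "Client sent an HTTP request to an HTTPS server."
curl http://ds-pipeline-sample.opendatahub.svc.cluster.local:8888/apis/v2beta1/runs

# Skip verification; expect {} when the DSP has no runs.
curl -k "$SVC/apis/v2beta1/runs"

# Verify against the cluster CA bundle instead of skipping verification.
oc get configmap kube-root-ca.crt -n opendatahub -o jsonpath='{.data.ca\.crt}' > ca.crt
curl --cacert ca.crt "$SVC/apis/v2beta1/runs"

# List runs via the external route as well (token needed if behind the oauth proxy).
ROUTE=$(oc get route ds-pipeline-sample -n opendatahub -o jsonpath='{.spec.host}')
curl -k -H "Authorization: Bearer $(oc whoami -t)" "https://$ROUTE/apis/v2beta1/runs"
```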
Checklist