Access denied to clusters that don't have Unity Catalog enabled #175
@arpitjasa-db here is what I did: first I created a Microsoft Entra ID access token, added a new profile with this token, and then tried it, which returned an error.
Update: it is something related to my Terraform. I created a service principal manually and it worked. The only difference I can see is that the Terraform-created SPs did not have the User.Read API permission. Is there any way you could update https://github.com/databricks/terraform-databricks-mlops-azure-infrastructure-with-sp-creation? I adapted it since it was a little outdated, but clearly I must be missing something.
Yeah, we set those manually in the module. The module is just Terraform code, so you can adapt it yourself, but we have plans to deprecate these modules anyway since we don't really have much support for them anymore.
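For reference, a minimal sketch of how the User.Read Microsoft Graph permission can be declared with the azuread Terraform provider; the resource names and display name below are placeholders, not the module's actual code:

```hcl
# Sketch only: declare the delegated Microsoft Graph User.Read permission on
# the app registration. Assumes a recent azuread provider; resource names and
# the display name are placeholders, not the module's actual code.
data "azuread_application_published_app_ids" "well_known" {}

resource "azuread_service_principal" "msgraph" {
  client_id    = data.azuread_application_published_app_ids.well_known.result["MicrosoftGraph"]
  use_existing = true
}

resource "azuread_application" "mlops_sp_app" {
  display_name = "example-mlops-sp" # placeholder

  required_resource_access {
    resource_app_id = data.azuread_application_published_app_ids.well_known.result["MicrosoftGraph"]

    resource_access {
      id   = azuread_service_principal.msgraph.oauth2_permission_scope_ids["User.Read"]
      type = "Scope"
    }
  }
}
```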
OK, it was in fact something related to my Terraform: I was missing a managed application in the local directory for each service principal. Now I'm getting a new error when running the job.
This happens when logging the model. The cluster created to run the job has its access mode set to Custom by default, which does not have access to Unity Catalog. I can't seem to find how to change this access mode in the new_cluster job settings.
Edit: I added the access mode to the new_cluster settings, but now I'm getting a different error.
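For reference, the relevant fields are data_security_mode and single_user_name in the cluster spec; the same keys apply under new_cluster in a bundle's job definition, since bundles follow the Jobs API cluster spec. A minimal Terraform sketch, assuming the databricks provider's job new_cluster block accepts the same fields as databricks_cluster; all names, versions, and IDs below are placeholders:

```hcl
# Sketch only: a job cluster that runs in single-user (UC-capable) access
# mode. Assumes the databricks provider; the job name, Spark version, node
# type, notebook path, and SP application ID are all placeholders.
resource "databricks_job" "model_training" {
  name = "example-model-training" # placeholder

  job_cluster {
    job_cluster_key = "training_cluster"

    new_cluster {
      spark_version      = "14.3.x-cpu-ml-scala2.12"              # placeholder
      node_type_id       = "Standard_DS3_v2"                      # placeholder
      num_workers        = 1
      data_security_mode = "SINGLE_USER"                          # UC-capable access mode
      single_user_name   = "00000000-0000-0000-0000-000000000000" # SP application ID (placeholder)
    }
  }

  task {
    task_key        = "train"
    job_cluster_key = "training_cluster"

    notebook_task {
      notebook_path = "/path/to/train_notebook" # placeholder
    }
  }
}
```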
Yeah, this is similar to #173, where the type of UC cluster used creates some confusion. Generally single-user mode is the most feature-compatible, but the permissions around it create some issues. When you got the above error, were you running the cluster as the SP, and is the owner of the cluster also the SP?
The cluster was created and run as the SP, but it shows me as the owner.
If you try re-assigning the cluster owner to the SP, does that resolve the issue? It would also help to confirm that the SP has all the right permissions on the catalog/schema/model.
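On the permissions side, a minimal sketch of the Unity Catalog grants using the databricks provider's databricks_grants resource; the catalog, schema, and principal values are placeholders, and the privilege lists should be adjusted to what the job actually does:

```hcl
# Sketch only: Unity Catalog grants for the SP. Catalog/schema names and the
# principal (SP application ID) are placeholders; adjust the privilege lists
# to what the job actually needs. Note that databricks_grants manages the
# complete set of grants on each securable.
resource "databricks_grants" "ml_catalog" {
  catalog = "example_catalog" # placeholder

  grant {
    principal  = "00000000-0000-0000-0000-000000000000" # SP application ID (placeholder)
    privileges = ["USE_CATALOG"]
  }
}

resource "databricks_grants" "ml_schema" {
  schema = "example_catalog.example_schema" # placeholder

  grant {
    principal  = "00000000-0000-0000-0000-000000000000" # SP application ID (placeholder)
    privileges = ["USE_SCHEMA", "SELECT", "CREATE_TABLE", "CREATE_MODEL"]
  }
}
```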
How can I achieve that? A new cluster is created each time the job is run, unless I create a dedicated SP cluster, re-assign the cluster owner to the SP, and then use it instead of creating a new one each time.
I checked the details of the cluster associated with the job; it's in the automatically added tags. @arpitjasa-db I checked, and I do also have that .bundle folder with the Terraform state within the SP directory.
Could you confirm the Terraform state deploys as expected, and who it specifies as the creator and what policy is attached?
This is what I found regarding the model training job: the Terraform state does not mention a creator anywhere.
@sw33zy can you confirm that the value for that setting is the same?
Yes, it is the same. When I omit it, nothing changes; I get the same error.
Hmm, we're not able to reproduce this error on our end, so let's try to isolate which part is causing the issue. Could you try running the job manually from the UI? And if that gives the same error, could you try running the notebook directly (maybe on Serverless or a different UC cluster)?
Hello,
I was following the CI/CD setup guide: I set up the service principals, assigned them to each workspace, and set up the secrets on the repo. I am getting this error on the CI tests and can't seem to understand why it is happening. Could it be a permission issue? The service principals were created using an adaptation of https://github.com/databricks/terraform-databricks-mlops-azure-infrastructure-with-sp-creation