
Fix creation of OVA server pod under enforced restricted namespaces #660

Merged: 1 commit into kubev2v:main on Nov 23, 2023

Conversation

@bkhizgiy (Member) commented Nov 23, 2023

When a new namespace is created in OpenShift, it is automatically labeled with pod-security.kubernetes.io/audit, which refers to a secondary, restricted security level. This automatic labeling does not happen in plain Kubernetes, nor for the default namespace. When a namespace is explicitly marked as restricted, an additional label, pod-security.kubernetes.io/enforce, is added. This label represents the highest security level, where violations are not permitted. The OVA server pod is created with standard settings in all namespaces except those carrying the enforce label; in those namespaces, extra configuration is required to eliminate any potential security violations.
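For context, here is a minimal sketch (not the actual patch in this PR) of the kind of securityContext a container needs to satisfy the restricted Pod Security Standard, using the k8s.io/api/core/v1 types; the helper name is hypothetical:

```go
package provider

import (
	core "k8s.io/api/core/v1"
)

// restrictedSecurityContext is a hypothetical helper returning a container
// securityContext that satisfies the "restricted" Pod Security Standard
// (pod-security.kubernetes.io/enforce=restricted).
func restrictedSecurityContext() *core.SecurityContext {
	nonRoot := true
	noEscalation := false
	return &core.SecurityContext{
		RunAsNonRoot:             &nonRoot,      // restricted: must not run as root
		AllowPrivilegeEscalation: &noEscalation, // restricted: escalation disallowed
		Capabilities: &core.Capabilities{
			Drop: []core.Capability{"ALL"}, // restricted: drop all Linux capabilities
		},
		SeccompProfile: &core.SeccompProfile{
			Type: core.SeccompProfileTypeRuntimeDefault, // restricted: seccomp profile required
		},
	}
}
```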

@liranr23 (Member) left a comment

Did you check the other pods we create (virt-v2v, populators)? In my opinion they work in the default namespace; I don't remember us ever checking the restriction level. Can the OVA provider work with the additional restriction without checking the namespace?

pkg/controller/provider/ova-setup.go: 5 review comments (outdated, resolved)
@bkhizgiy (Member, Author) commented

After testing with the virt-v2v pod, it works without the extra configuration. The difference seems to come from the fact that the OVA server runs as a Deployment, while virt-v2v is created directly as a Pod: for a Deployment under the highest security level there is extra enforcement on the user that can run the pod, while for plain pods it is not required and is determined automatically.
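A minimal sketch of what such a namespace check could look like, assuming a controller-runtime client (the helper and constant are hypothetical, not code from this PR):

```go
package provider

import (
	"context"

	core "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// enforceLabel is the Pod Security Admission label whose violations are rejected.
const enforceLabel = "pod-security.kubernetes.io/enforce"

// isRestrictedEnforced reports whether the named namespace enforces the
// "restricted" Pod Security Standard. Hypothetical helper for illustration.
func isRestrictedEnforced(ctx context.Context, c client.Client, namespace string) (bool, error) {
	ns := &core.Namespace{}
	if err := c.Get(ctx, client.ObjectKey{Name: namespace}, ns); err != nil {
		return false, err
	}
	return ns.Labels[enforceLabel] == "restricted", nil
}
```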

sonarcloud bot commented Nov 23, 2023

SonarCloud Quality Gate passed!

Bugs: 0 (rated A)
Vulnerabilities: 0 (rated A)
Security Hotspots: 0 (rated A)
Code Smells: 0 (rated A)

Coverage: no information
Duplication: 0.0%

@liranr23 (Member) commented

> After testing with the virt-v2v pod, it works without the extra configuration. The difference seems to come from the fact that the OVA server runs as a Deployment, while virt-v2v is created directly as a Pod: for a Deployment under the highest security level there is extra enforcement on the user that can run the pod, while for plain pods it is not required and is determined automatically.

So, can we do it as restricted all the time, in any namespace?

@bkhizgiy (Member, Author) commented Nov 23, 2023

> After testing with the virt-v2v pod, it works without the extra configuration. The difference seems to come from the fact that the OVA server runs as a Deployment, while virt-v2v is created directly as a Pod: for a Deployment under the highest security level there is extra enforcement on the user that can run the pod, while for plain pods it is not required and is determined automatically.

> So, can we do it as restricted all the time, in any namespace?

No. For the audit security level, specifying the user together with non-root blocks pod creation. I also tried specifying the permitted UID range as part of the controller SCC, but that is only allowed when the security level is set to enforce (so it's the same behavior).
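Putting the pieces together, a hedged sketch of how a controller could apply the stricter settings only when the namespace enforces restricted, continuing the hypothetical helpers sketched above (same package and imports assumed):

```go
// configureOvaContainer is hypothetical wiring for illustration: it applies
// the restricted securityContext only when the target namespace enforces
// "restricted"; elsewhere it leaves the defaults so the platform can assign
// the user automatically.
func configureOvaContainer(ctx context.Context, c client.Client, namespace string, container *core.Container) error {
	restricted, err := isRestrictedEnforced(ctx, c, namespace)
	if err != nil {
		return err
	}
	if restricted {
		container.SecurityContext = restrictedSecurityContext()
	}
	return nil
}
```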

@liranr23 merged commit 498a42b into kubev2v:main on Nov 23, 2023
10 checks passed