
PV nodeAffinity.required.nodeSelectorTerms matchFields does not result in the kube scheduler placing pods on the same node as the PV #451

Closed
kmurray01 opened this issue Sep 4, 2024 · 7 comments

@kmurray01
Observing this on the latest v0.0.29 release and master-head local-path-provisioner on K3s v1.30.4+k3s1.

PVs created with the new local-path-provisioner image have an updated nodeAffinity.required.nodeSelectorTerms, as depicted below:

nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - my-agent-host.example.com

This change originated from PR #414, which was subsequently included in the latest v0.0.29 release and master-head.

When deploying a pod that specifies a persistentVolumeClaim referencing the associated PVC, the pod is scheduled on a different node, not my-agent-host.example.com. That pod then fails to initialize, as it is unable to mount the PV volume path that exists on my-agent-host.example.com.

Previously, on v0.0.28, the PV nodeAffinity.required.nodeSelectorTerms was as below, and this worked: the kube-scheduler placed the pod on the same node on which the PV local path volume was created, i.e. my-agent-host.example.com in this example.

nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - my-agent-host.example.com

It would seem that switching nodeAffinity.required.nodeSelectorTerms to matchFields on the node field metadata.name does not work: the kube-scheduler does not honor that nodeAffinity.

It's also important to highlight that on the K3s node, the value of metadata.name matches my-agent-host.example.com.
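As a stopgap for already-provisioned PVs, one could rewrite the matchFields term back into the matchExpressions form that worked on v0.0.28. The sketch below is a hypothetical helper (not part of local-path-provisioner) that performs that transform on a PV's nodeAffinity dict; it assumes the node's metadata.name equals its kubernetes.io/hostname label value, which holds for the node in this report.

```python
import copy

def match_fields_to_expressions(node_affinity: dict) -> dict:
    """Return a copy of a PV nodeAffinity with matchFields on
    metadata.name rewritten as matchExpressions on kubernetes.io/hostname.

    Hypothetical workaround helper; assumes node name == hostname label.
    """
    affinity = copy.deepcopy(node_affinity)
    for term in affinity["required"]["nodeSelectorTerms"]:
        fields = term.pop("matchFields", [])
        exprs = term.setdefault("matchExpressions", [])
        for f in fields:
            if f["key"] == "metadata.name":
                # Rewrite the node field requirement as a label requirement.
                exprs.append({
                    "key": "kubernetes.io/hostname",
                    "operator": f["operator"],
                    "values": f["values"],
                })
            else:
                # Leave any unrelated matchFields entries in place.
                term.setdefault("matchFields", []).append(f)
    return affinity

broken = {
    "required": {
        "nodeSelectorTerms": [
            {"matchFields": [{"key": "metadata.name",
                              "operator": "In",
                              "values": ["my-agent-host.example.com"]}]}
        ]
    }
}

fixed = match_fields_to_expressions(broken)
print(fixed["required"]["nodeSelectorTerms"][0]["matchExpressions"][0]["key"])
# kubernetes.io/hostname
```

Since spec.nodeAffinity on a PersistentVolume is immutable in most Kubernetes versions, applying the corrected affinity may require recreating the PV object rather than patching it in place.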

@kmurray01
Author

@derekbit @jan-g, could you please triage?

@haojingcn

haojingcn commented Sep 26, 2024

Yes, I hit the same issue.

A StatefulSet with volumeClaimTemplates defined generates a PV like:
(screenshot of the generated PV YAML)

metadata.name is not a node label in K8s (1.19–1.26), so the node does not carry that label, and pod scheduling fails with the error "had volume node affinity conflict".

@huangguoqiang

I hit the same issue.

@spirkaa

spirkaa commented Oct 1, 2024

Same problem. After draining the node, the pod is scheduled onto another node without the PV, resulting in a crash loop or other incorrect behavior of the app inside.

@derekbit
Member

derekbit commented Oct 1, 2024

Sorry for the delayed response. I'm on vacation this week. I plan to release 0.0.30 with a fix for the issue next week.


github-actions bot commented Dec 1, 2024

This issue is stale because it has been open 30 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.

@github-actions github-actions bot added the stale label Dec 1, 2024

github-actions bot commented Dec 7, 2024

This issue was closed because it has been stalled for 5 days with no activity.

@github-actions github-actions bot closed this as completed Dec 7, 2024