
[Bug] - 3.32.0+ - Helm Chart generates 2 'affinity' blocks in the same YAML 'spec' block #156

talron23 opened this issue Nov 19, 2024 · 3 comments

@talron23

Hey,

I tested this and the issue appears on both 3.32.0 and 3.33.1; it looks like it was introduced when another nodeAffinity rule was added for the DaemonSets.

The problem occurs when the platform is set to eks. The chart then wants to add this block to the DaemonSet YAML:

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: eks.amazonaws.com/compute-type
                    operator: NotIn
                    values:
                      - fargate

However, your recent changes add this to the same block:

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
                      - amd64

So here is how the relevant YAML looks:

# Source: cloudguard/templates/imagescan/daemon/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: release-name-imagescan-daemon
  namespace: argocd
  labels:
    helm.sh/chart: cloudguard-2.33.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: 2.33.1
    app.created.by.template: "true"
    app.kubernetes.io/name: release-name-imagescan-daemon
    app.kubernetes.io/instance: release-name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name-imagescan-daemon
      app.kubernetes.io/instance: release-name
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 50%
  template:
    metadata:
      annotations:
        checksum/cgsecret:
        checksum/config:
        checksum/regsecret:
      labels:
        helm.sh/chart: cloudguard-2.33.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/version: 2.33.1
        app.created.by.template: "true"
        app.kubernetes.io/name: release-name-imagescan-daemon
        app.kubernetes.io/instance: release-name
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
                      - amd64
      priorityClassName: system-node-critical
      securityContext:
        runAsUser: 17112
        runAsGroup: 17112
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: release-name-imagescan-daemon
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: eks.amazonaws.com/compute-type
                    operator: NotIn
                    values:
                      - fargate
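
For comparison, what I'd expect is a single merged block, with both expressions under the same nodeSelectorTerms entry (a sketch of the expected output, assuming both rules are meant to apply together; matchExpressions within one term are ANDed, while separate nodeSelectorTerms are ORed, so putting the fargate exclusion in its own term would not enforce both):

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
                      - amd64
                  - key: eks.amazonaws.com/compute-type
                    operator: NotIn
                    values:
                      - fargate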

We can see this happening in these two templates:

  • cloudguard/templates/imagescan/daemon/daemonset.yaml
  • cloudguard/templates/flowlogs/daemon/daemonset.yaml

To reproduce it, enable those addons and set the platform to eks in values.yaml:

platform: eks
addons:
  flowLogs:
    enabled: true

  imageScan:
    enabled: true
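
With those values, rendering the chart locally is enough to reproduce it (release name and chart path here are placeholders):

    helm template release-name ./cloudguard -f values.yaml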

It looks like the cause is the "affinity:" key added in the template itself, while a second "affinity:" key also comes from the "common.pod.properties" function in _helpers.tpl:

      affinity:
{{ include "common.node.affinity.multiarch" $config | indent 8 }}
{{ include "common.pod.properties" $config | indent 6 }}

I tried to fix it in the relevant templates, but it's tied to the "common.pod.properties" function, which is used in many places across the chart.
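
If it helps, the direction I was thinking of is to render a single affinity block that combines both selectors, roughly like this (a sketch only; "common.node.affinity.multiarch.expressions" is a hypothetical helper that would emit just the matchExpressions entries, it doesn't exist in the chart today):

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  # hypothetical helper: emits only the arch matchExpressions entries
{{ include "common.node.affinity.multiarch.expressions" $config | indent 18 }}
                  # the eks/fargate exclusion then lives under the same term
                  # instead of arriving through a second top-level "affinity:" key
                  - key: eks.amazonaws.com/compute-type
                    operator: NotIn
                    values:
                      - fargate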

@chkp-rigor (Contributor)

Hi @talron23,
Let me see if I understand correctly: the affinities are not merged, so only the last one takes effect. I assume you didn't define a custom affinity, did you?
We will check it and update.

@talron23 (Author)

Yes, the affinities are not merged. We use kustomize, and it fails to parse the manifest; this is the error from kustomize:

    Error: map[string]interface {}(nil): yaml: unmarshal errors:
    line 57: mapping key "affinity" already defined at line 38

No custom affinity.
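
For what it's worth, this failure is expected from any strict YAML parser, since the rendered pod spec contains the same mapping key twice. A minimal reproduction of what kustomize trips over (a sketch, not the actual chart output):

    spec:
      affinity:        # first "affinity" key
        nodeAffinity: {}
      nodeSelector:
        kubernetes.io/os: linux
      affinity:        # duplicate mapping key -> the unmarshal error above
        nodeAffinity: {}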

@chkp-rigor (Contributor)

I see. I will update on our progress.
It's odd that kustomize doesn't ignore the duplicate key the way Kubernetes does.
