Multus pods always restart when the multus CNI config has been specified #1366

cyclinder opened this issue Dec 10, 2024 · 0 comments

What happened:

root@10-20-1-10:/home/cyclinder/multus4.0# kubectl get po -n kube-system
NAME                                           READY   STATUS             RESTARTS        AGE
calico-kube-controllers-86447f4445-pjhnz       1/1     Running            0               4d4h
calico-node-qc5cp                              1/1     Running            0               4d4h
calico-node-vn5bw                              1/1     Running            0               4d4h
coredns-76f75df574-9drmz                       1/1     Running            0               4d5h
coredns-76f75df574-xzkcz                       1/1     Running            0               4d5h
etcd-spider-control-plane                      1/1     Running            0               4d5h
kube-apiserver-spider-control-plane            1/1     Running            0               4d5h
kube-controller-manager-spider-control-plane   1/1     Running            0               4d5h
kube-multus-ds-jctrl                           0/1     CrashLoopBackOff   6 (2m26s ago)   8m18s
kube-multus-ds-jpgdr                           0/1     CrashLoopBackOff   6 (2m20s ago)   8m18s
kube-proxy-6csq8                               1/1     Running            0               4d5h
kube-proxy-z9dhd                               1/1     Running            0               4d5h
kube-scheduler-spider-control-plane            1/1     Running            0               4d5h
root@10-20-1-10:/home/cyclinder/multus4.0# kubectl logs -f -n kube-system kube-multus-ds-jctrl
Defaulted container "kube-multus" out of: kube-multus, install-multus-binary (init)
2024-12-10T08:55:01Z [verbose] multus-daemon started
2024-12-10T08:55:01Z [verbose] server configured with chroot: /hostroot
2024-12-10T08:55:01Z [verbose] Filtering pod watch for node "spider-worker"
2024-12-10T08:55:02Z [verbose] API readiness check
2024-12-10T08:55:02Z [verbose] API readiness check done!
2024-12-10T08:55:02Z [verbose] multus daemon is exited
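
For reference, the daemon configuration the pod actually runs with can be read back from the ConfigMap defined in the reproduction manifest below (a quick check, assuming the ConfigMap name and key from that manifest; note the escaped dot in the jsonpath key):

~# kubectl get cm -n kube-system multus-daemon-config -o jsonpath='{.data.daemon-config\.json}'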

What you expected to happen:

Multus pods running well.

How to reproduce it (as minimally and precisely as possible):

~# kubectl apply -f multus.yaml
~# cat multus.yaml
# Note:
#   This deployment file is designed for a 'quickstart' of Multus, i.e. an easy installation for testing,
#   hence this deployment yaml intentionally does not cover the following:
#     - various configuration options
#     - minor deployment scenarios
#     - upgrade/update/uninstall scenarios
#   The Multus team understands that users' deployment scenarios are diverse, so we do not attempt
#   comprehensive coverage here. We expect that to be covered by each platform's deployment.
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: network-attachment-definitions.k8s.cni.cncf.io
spec:
  group: k8s.cni.cncf.io
  scope: Namespaced
  names:
    plural: network-attachment-definitions
    singular: network-attachment-definition
    kind: NetworkAttachmentDefinition
    shortNames:
      - net-attach-def
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          description: 'NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing
            Working Group to express the intent for attaching pods to one or more logical or physical
            networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec'
          type: object
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this representation
                of an object. Servers should convert recognized schemas to the
                latest internal value, and may reject unrecognized values. More info:
                https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this
                object represents. Servers may infer this from the endpoint the client
                submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: 'NetworkAttachmentDefinition spec defines the desired state of a network attachment'
              type: object
              properties:
                config:
                  description: 'NetworkAttachmentDefinition config is a JSON-formatted CNI configuration'
                  type: string
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multus
rules:
  - apiGroups: ["k8s.cni.cncf.io"]
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - ""
    resources:
      - pods
      - pods/status
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - ""
      - events.k8s.io
    resources:
      - events
    verbs:
      - create
      - patch
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: multus
subjects:
  - kind: ServiceAccount
    name: multus
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: multus
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: multus-daemon-config
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  daemon-config.json: |
    {
        "chrootDir": "/hostroot",
        "cniVersion": "0.3.1",
        "logLevel": "verbose",
        "logToStderr": true,
        "cniConfigDir": "/host/etc/cni/net.d",
        "multusAutoconfigDir": "/host/etc/cni/net.d",
        "multusConfigFile": "/tmp/multus-conf/00-multus.conf",
        "socketDir": "/host/run/multus/"
    }
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: multus-cni-config
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  # NOTE: If you'd prefer to manually apply a configuration file, you may create one here.
  # To customize the Multus installation, change the Multus pod's "args" line below from
  # - "--multus-conf-file=auto"
  # to:
  # "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
  # Additionally, ensure that "70-multus.conf" sorts alphabetically first in the
  # /etc/cni/net.d/ directory on each node; otherwise it will not be used by the kubelet.
  cni-conf.json: |
    {
      "name": "multus-cni-network",
      "type": "multus",
      "capabilities": {
        "bandwidth": true,
        "portMappings": true
      },
      "confDir": "/etc/cni/net.d/",
      "clusterNetwork": "k8s-pod-network",
      "multusNamespace": "kube-system",
      "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      hostPID: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
        - operator: Exists
          effect: NoExecute
      serviceAccountName: multus
      containers:
        - name: kube-multus
          image: ghcr.m.daocloud.io/k8snetworkplumbingwg/multus-cni:v4.1.3-thick
          command: [ "/usr/src/multus-cni/bin/multus-daemon" ]
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: true
          terminationMessagePolicy: FallbackToLogsOnError
          volumeMounts:
            - name: cni
              mountPath: /host/etc/cni/net.d
            # multus-daemon expects the cnibin path to be identical inside the pod and on the host.
            # e.g. if the CNI binaries live in '/opt/cni/bin' on the host, they must be mounted at '/opt/cni/bin'
            # in the multus-daemon pod as well, not at some other directory like '/opt/bin' or '/usr/bin'.
            - name: cnibin
              mountPath: /opt/cni/bin
            - name: host-run
              mountPath: /host/run
            - name: host-var-lib-cni-multus
              mountPath: /var/lib/cni/multus
            - name: host-var-lib-kubelet
              mountPath: /var/lib/kubelet
              mountPropagation: HostToContainer
            - name: host-run-k8s-cni-cncf-io
              mountPath: /run/k8s.cni.cncf.io
            - name: host-run-netns
              mountPath: /run/netns
              mountPropagation: HostToContainer
            - name: multus-daemon-config
              mountPath: /etc/cni/net.d/multus.d
              readOnly: true
            - name: hostroot
              mountPath: /hostroot
              mountPropagation: HostToContainer
            - name: multus-cni-config
              mountPath: /tmp/multus-conf
          env:
            - name: MULTUS_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
      initContainers:
        - name: install-multus-binary
          image: ghcr.m.daocloud.io/k8snetworkplumbingwg/multus-cni:v4.1.3-thick
          command:
            - "cp"
            - "/usr/src/multus-cni/bin/multus-shim"
            - "/host/opt/cni/bin/multus-shim"
          resources:
            requests:
              cpu: "10m"
              memory: "15Mi"
          securityContext:
            privileged: true
          terminationMessagePolicy: FallbackToLogsOnError
          volumeMounts:
            - name: cnibin
              mountPath: /host/opt/cni/bin
              mountPropagation: Bidirectional
      terminationGracePeriodSeconds: 10
      volumes:
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
        - name: hostroot
          hostPath:
            path: /
        - name: multus-daemon-config
          configMap:
            name: multus-daemon-config
            items:
            - key: daemon-config.json
              path: daemon-config.json
        - name: multus-cni-config
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 00-multus.conf
        - name: host-run
          hostPath:
            path: /run
        - name: host-var-lib-cni-multus
          hostPath:
            path: /var/lib/cni/multus
        - name: host-var-lib-kubelet
          hostPath:
            path: /var/lib/kubelet
        - name: host-run-k8s-cni-cncf-io
          hostPath:
            path: /run/k8s.cni.cncf.io
        - name: host-run-netns
          hostPath:
            path: /run/netns/
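
The crash seems tied to "multusConfigFile" pointing at a static file (/tmp/multus-conf/00-multus.conf) rather than the default "auto", as the issue title suggests. Since the daemon is expected to copy that file into "cniConfigDir", one way to narrow the problem down is to check what actually landed in the CNI config directory on the node (paths taken from daemon-config.json above):

~# ls /etc/cni/net.d/
~# cat /etc/cni/net.d/00-multus.conf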

Anything else we need to know?:

Environment:

  • Multus version
    image path and image ID (from 'docker images')
  • Kubernetes version (use kubectl version):
  • Primary CNI for Kubernetes cluster:
  • OS (e.g. from /etc/os-release):
  • File of '/etc/cni/net.d/'
  • File of '/etc/cni/multus/net.d'
  • NetworkAttachment info (use kubectl get net-attach-def -o yaml)
  • Target pod yaml info (with annotation, use kubectl get pod <podname> -o yaml)
  • Other log outputs (if you use multus logging)