cluster fails to start if multiple control plane nodes are added #3680
Comments
@pacoxu didn't we wait for sync to happen before promoting?
I added the following arguments to the configuration file.
The kubelet fails to start with another error message.
Maybe an ulimit problem:
There is no ulimit issue if I only add one control-plane node to the cluster.
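A quick way to inspect the limits in question on the host is sketched below (standard Linux commands, not specific to kind):

```sh
# Per-process open-file limit for the current shell
ulimit -n

# Host inotify limits that kind's known-issues doc calls out
cat /proc/sys/fs/inotify/max_user_instances
cat /proc/sys/fs/inotify/max_user_watches
```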
This is a lot of nodes. Do you need them, and for what purpose? Most development should prefer single-node clusters. Each node consumes resources from the host, and unlike a "real" cluster, adding more nodes does not actually add more resources (it only appears to). You are almost certainly hitting resource limits on the host (see the known-issues doc re: inotify above, though this may not be the only limit you're hitting).
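For reference, the known-issues doc suggests raising the host inotify limits when running many nodes; a minimal sketch, assuming the values recommended there at the time of writing, is:

```sh
# Raise inotify limits on the host (values as recommended in kind's
# known-issues doc; adjust if the doc recommends different numbers)
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```

To persist them across reboots, the same settings can be added to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and applied with `sudo sysctl -p`.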
What happened:
The cluster fails to create if I add multiple control plane nodes.
Error logs
What you expected to happen:
kind supports creating a Kubernetes cluster with multiple control plane nodes.
How to reproduce it (as minimally and precisely as possible):
Use the following configuration to launch a cluster:
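A minimal sketch of such a config, assuming three control-plane nodes and two workers (the node counts are illustrative only, not necessarily the exact file used):

```yaml
# Hypothetical kind config with multiple control-plane nodes.
# The node counts below are an assumption for illustration.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
```

Such a config is passed to `kind create cluster --config <file>`.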
Anything else we need to know?:
Environment:
kind version: 0.23.0
Runtime info (use docker info, podman info, or nerdctl info):
Client:
Version: 25.0.3
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.0.0+unknown
Path: /usr/libexec/docker/cli-plugins/docker-buildx
Server:
Containers: 21
Running: 0
Paused: 0
Stopped: 21
Images: 78
Server Version: 25.0.3
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 64b8a811b07ba6288238eefc14d898ee0b5b99ba
runc version: 4bccb38cc9cf198d52bebf2b3a90cd14e7af8c06
init version: de40ad0
Security Options:
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.1.94-99.176.amzn2023.x86_64
Operating System: Amazon Linux 2023.5.20240701
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.629GiB
Name: ip-172-31-18-230.eu-west-1.compute.internal
ID: c3b0373c-7367-45d1-8e7b-12a0ff695616
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
binglj.people.aws.dev:443
127.0.0.0/8
Live Restore Enabled: false
OS (from /etc/os-release): Amazon Linux 2023
kubectl version: 1.30.0