[BUG] panic: runtime error: slice bounds out of range [:-1] - segfault when using ingress controller #113

Open
MrBlaise opened this issue Dec 18, 2024 · 0 comments

MrBlaise commented Dec 18, 2024

I am using the HAProxy ingress controller, and I have observed that under heavy load, when the backend pods are restarted, HAProxy can sometimes fail with a segmentation fault. Based on the stack trace, the panic originates in the client-native library, specifically this line: https://github.com/haproxytech/client-native/blob/master/runtime/runtime_client.go#L165

Stacktrace:

[NOTICE]   (69) : haproxy version is 3.1.0-f2b9791
[ALERT]    (69) : Current worker (4635) exited with code 139 (Segmentation fault)
[WARNING]  (69) : A worker process unexpectedly died and this can only be explained by a bug in haproxy or its dependencies.
Please check that you are running an up to date and maintained version of haproxy and open a bug report.
[ALERT]    (69) : exit-on-failure: killing every processes with SIGTERM
HAProxy version 3.1.0-f2b9791 2024/11/26 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2026.
Known bugs: http://www.haproxy.org/bugs/bugs-3.1.0.html
Running on: Linux 6.1.0-27-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.115-1 (2024-11-01) x86_64
[WARNING]  (69) : Former worker (4615) exited with code 143 (Terminated)
[WARNING]  (69) : All workers exited. Exiting... (139)
panic: runtime error: slice bounds out of range [:-1]

goroutine 444 [running]:
github.com/haproxytech/client-native/v5/runtime.(*client).Reload(0x14?)
	/go/pkg/mod/github.com/haproxytech/client-native/[email protected]/runtime/runtime_client.go:166 +0x2ba
github.com/haproxytech/kubernetes-ingress/pkg/haproxy/process.(*s6Control).Service(0xc001dec008, {0x263f416?, 0x8?})
	/src/pkg/haproxy/process/s6-overlay.go:61 +0xd5
github.com/haproxytech/kubernetes-ingress/pkg/controller.(*HAProxyController).updateHAProxy(0xc000de7808)
	/src/pkg/controller/controller.go:204 +0xa58
github.com/haproxytech/kubernetes-ingress/pkg/controller.(*HAProxyController).SyncData(0xc000de7808)
	/src/pkg/controller/monitor.go:38 +0x5b2
github.com/haproxytech/kubernetes-ingress/pkg/controller.(*HAProxyController).Start(0xc000de7808)
	/src/pkg/controller/controller.go:100 +0x209
created by main.main in goroutine 1
	/src/main.go:164 +0xe65
Ingress Controller exited with fatal code 2, taking down the S6 supervision tree

The relevant code (https://github.com/haproxytech/client-native/blob/master/runtime/runtime_client.go#L165):

	output, err := c.runtime.ExecuteMaster("reload")
	if err != nil {
		return "", fmt.Errorf("cannot reload: %w", err)
	}
	parts := strings.SplitN(output, "\n--\n", 2)
	if len(parts) == 1 {
		// No startup logs. This happens when HAProxy is compiled without USE_SHM_OPEN.
		status = output[:len(output)-1]
	} else {
		status, logs = parts[0], parts[1]
	}

I believe the root cause is that output is never checked for being empty. When it is empty, status = output[:len(output)-1] evaluates to status = output[:-1], which causes the panic.
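To illustrate the failure mode, here is a small standalone Go program (not part of the library; the function names and the sample reply string are made up for this example) that reproduces the panic on an empty reply and shows how strings.TrimSuffix avoids it:

package main

import (
	"fmt"
	"strings"
)

// trimStatus mimics the problematic expression: with an empty output it
// evaluates output[:-1] and panics with "slice bounds out of range [:-1]".
func trimStatus(output string) string {
	return output[:len(output)-1]
}

// safeTrimStatus is a hypothetical safe variant: TrimSuffix is a no-op on
// an empty string, so no negative slice bound can ever be produced.
func safeTrimStatus(output string) string {
	return strings.TrimSuffix(output, "\n")
}

func main() {
	fmt.Printf("%q\n", safeTrimStatus("Success\n")) // "Success" (placeholder reply)
	fmt.Printf("%q\n", safeTrimStatus(""))          // "" - no panic

	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r) // slice bounds out of range [:-1]
		}
	}()
	trimStatus("") // panics, mirroring the crash in Reload()
}

Checking len(output) > 0 before slicing, or trimming the trailing newline with strings.TrimSuffix as above, would both avoid the negative bound; which fix fits the library's conventions best is up to the maintainers.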
