Logging? #3
This is the only solution that came to mind for me as well. @bjaglin was this your final thought on the matter, or did you find an alternative?
Unless I'm mistaken, this container currently doesn't log anything. I prefer to have each container log to stdout; then I can choose to send that on to a logging container if I want to. The (hackish) way I ended up doing it is here: https://gist.github.com/nicot/6c680c626156f842444f
Just start the container and mount /dev/log from the host into the container. With an haproxy config that logs to /dev/log it works fine, except that systemd-journald doesn't associate it with the haproxy unit (if you're using systemd units to control it):

```
docker run -p 80:80 -v /dev/log:/dev/log haproxy
```

haproxy.cfg:
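The haproxy.cfg snippet referenced above didn't survive extraction; a minimal config that logs to the mounted /dev/log socket might look like this (a sketch, not the commenter's original — facility, level, and the defaults section are assumptions):

```
global
    # HAProxy can log to a UNIX datagram socket; /dev/log is bind-mounted from the host
    log /dev/log local0 info

defaults
    log global
    mode http
    option httplog
```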
Any chance we'll see this implemented?
Docker already has its own docker logs facility, and administrators expect things to show up there. Therefore, /dev/log seems like a poor replacement. Any chance proper logging to stdout could be added?
I tried a complicated workaround with rsyslogd in the container, but while it seems to work more or less (I'm getting regular log messages, at least some of them) I'm running into this message quite often:
The suggested solution at http://comments.gmane.org/gmane.comp.web.haproxy/4716 isn't really useful, because it suggests changing a kernel option on the host, which shouldn't be necessary to run a docker container properly. Therefore it would be really helpful if proper stdout logging could be added to haproxy itself.
See also moby/moby#13726 for minor enlightenment. One option is to supply remote syslog endpoints per container (marking up the messages with the source host as appropriate) - logstash with multiple syslog inputs may be useful. This might even be automated. It seems reasonable to assume anyone using this container will need to supply their own syslog endpoint.
Do I understand this correctly that we should again manually add some daemon for logging things in the container/image? Proper logging isn't an "optional" feature, so I don't think documentation is the solution here - the container should already have this integrated. As a side note, redirecting everything else from docker to the systemd logs as well seems weird: everything would then be in one place, but only because one single container isn't configured to use the docker logging infrastructure properly. It seems like the wrong end to address this. However, maybe it would be preferable/easiest to convince the haproxy folks to simply add stdout logging instead of attempting all those logging-daemon workarounds.
+1 for stdout/stderr logging - then tools based on the Docker API, like logspout or sematext-docker-agent, could get the logs from the Docker API, and it works with syslog or any other log driver. The user could specify the log-driver settings and forward logs to dedicated logging services.
@Jonast and @megastef you need to argue this with the haproxy authors. This container is merely a wrapper around what they ship, and their product logs to syslog. The container could ship with a syslog daemon.
+1 for shipping with a syslog daemon, if that means that docker logs will then work. In all honesty, I know basically nothing about syslog, and the thought of managing my logs out-of-band from all my other docker services makes me a bit uncomfortable. I can imagine there are many in my same boat. Maybe there could be tags with and without an embedded syslog?
Any update on this? How does one do logging properly?
Once you move beyond managing a single host, remember that many applications log only limited messages to stdout. For those wondering how to disambiguate multiple identical applications on the same host: what you really want is for the application to include a symbolic name (from an environment variable, for instance) in its messages. This has nothing to do with Docker.
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#log-send-hostname also lets you specify the hostname to be sent. |
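The linked log-send-hostname directive lives in HAProxy's global section; a sketch (the endpoint address and hostname string here are made-up examples, not from the thread):

```
global
    # hypothetical remote syslog endpoint
    log 192.0.2.10:514 local0
    # include this string as the hostname field of each emitted syslog message
    log-send-hostname edge-lb-1
```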
Note that the haproxy:alpine image already has a syslogd. I was able to get logging to stdout by using the following in docker-compose.yml:

```
command: /bin/sh -c "/sbin/syslogd -O /dev/stdout && haproxy -f /usr/local/etc/haproxy/haproxy.cfg"
```

A similar command should work from a Dockerfile as well.
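Pieced together, a compose file using that trick might look like the following (service name, port, and volume paths are assumptions, not from the thread):

```yaml
version: "3"
services:
  haproxy:
    image: haproxy:alpine
    ports:
      - "80:80"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    # start busybox syslogd writing to stdout, then haproxy in the foreground
    command: /bin/sh -c "/sbin/syslogd -O /dev/stdout && haproxy -f /usr/local/etc/haproxy/haproxy.cfg"
```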
@dack neat! I wonder if this could be made the default for the haproxy image? |
I'm currently using a simple syslog-ng sidecar container to log from haproxy. I wouldn't want there to be a default syslog option. Haproxy has intentionally chosen not to log through stdout and using a sidecar makes it easy to respect that. |
Why not? There could be a trivial ENV var setting added to turn it off, and just because you use a special setup doesn't mean the container shouldn't be in a default working state. |
@Jonast isn't that simply shifting the problem? People installing haproxy in non-Docker environments simply configure it to talk to their existing syslog infrastructure. I wouldn't want to launch it in a Docker container then find out I need to override the supplied syslog to log to my existing syslog infrastructure. Docker can direct container output (STDOUT/STDERR) to a given syslog instance already, I guess "they" expect you to be running containers in a hosting environment that already has adequate syslog services for your scenario. If a container shipped with a syslog receiver for logging purposes, surely it should be opt-in at run-time, not opt-out. |
@jmkgreen We don't need to change anything for a non-docker environment. Just change the Dockerfile to make haproxy log to stdout/stderr. Ideally haproxy would have a CLI or config file option that could be used for this, but it does not. That's why I've run syslogd in the container - purely to get stdout/stderr logging (which should be the default for any docker container). |
@jmkgreen docker simply expects stdout/stderr logging by default and integrates it into the built-in docker logs functionality. That is how the docker universe works (edit: as far as I can tell! Maybe I have been doing it all wrong? But that's how I have encountered it for 99% of the containers I have seen), at least for now. I didn't make the rules.

As a result, I don't think this should be opt-in: an opt-out works just as nicely, and IMHO it is better to stick to the standard behavior of a docker container (which is "make the application log to stdout" - something that generally doesn't require syslogd at all; that's just an haproxy special case) rather than to something you consider superior but which nobody else follows by default, unless there is a very good reason.

However, I bet a minimal syslog-ng doesn't eat much resources, and if you add an opt-out ENV var that prevents it from launching at all, there is literally no performance impact for anyone who doesn't want it. So there is no good reason IMHO. I really think it would be preferable to adapt the default behavior to match everyone else's containers; if someone wants high-performance logging directly to the host's syslog, ENV var options can make that happen, with the default behavior matching every docker user's expectations.
@Jonast This is the message from Willy I was alluding to earlier: https://www.mail-archive.com/[email protected]/msg17436.html If the author of the software we're wrapping in a container has explicitly chosen not to support logging to stdout, I don't think we should hack in syslog in the container to transform it to a stdout stream. I'd rather keep it in some kind of syslog all the way to my log aggregator unless we know that the way docker handles stdout is somehow faster than the 'normal' scenario. This is exactly the sort of scenario that the sidecar docker design pattern is good for if you don't have access or don't want to use a system syslog. |
2 years later: nope, no logging to stdout. haproxy is already falling short by not having a native integration with KV backends for service discovery. |
I don't really see what the argument is against my solution. The current situation:
With my solution:
There is zero impact on anyone who wants to ditch the docker logging and use pure syslog instead. For everyone else, they get standard docker logging instead of nothing at all. Seems like a win/win to me. |
@dack the performance impact? @ryansch explained this and linked to a further explanation by Willy, the haproxy author a couple of months ago. |
Without benchmarks I don't really buy that argument. Furthermore, the users who would most benefit from stdout-by-default probably won't be operating at a scale where it would be an issue. |
@PriceChild With the method I proposed, it's actually run through syslogd. So haproxy is not directly writing to stdout and would not have to wait for any stdout buffering (as everything is buffered by syslogd, not haproxy). |
Here's the stdout sidecar I use when running haproxy locally: |
Retaining a way to run with a possibly faster non-stdout configuration isn't wrong. Summed up, I really don't get why you don't just bundle syslog-ng with an ENV switch that stops the container from launching and using it at runtime, for the people who don't want it. I suggested this in a previous comment, and all you've said is basically "but for some people that's not the desired solution" - which is exactly what the ENV switch is for. I still don't get what the actual problem is. Unless the few megabytes to store syslog-ng, or a few lines of script to handle the ENV var at launch, are a huge problem, I don't see a good reason not to add this feature.

EDIT: and if that's still too much work, just use what @dack proposed and add a README section documenting a one-liner that changes the launch command back to external syslogd logging. It won't even be hard to do; everyone who wants the old behavior will easily find and use it - while the container will suddenly work out of the box with proper functionality by default.

EDIT 2: just to spell this out more verbosely: I absolutely think the sidecar approach is nice for advanced features. The only reason I think it's a bad idea here is that logging via docker logs is an absolute core feature that should "just work". There is nothing wrong with having optional other ways to do logging faster/better that can be easily enabled, but IMHO not enabling the expected way by default - purely for a performance improvement of unknown dimensions and politics (the original software vendor doesn't like the feature implemented that way) - is a mistake, especially given the minuscule impact on Dockerfile complexity, image size, etc. of providing this in a more reasonable way.
Hey @dack, I am using your solution for sending haproxy logs to stdout, but somehow I am not getting the request logs, just the startup ones. Have you managed to get request logs as well?
@wichon I'm not personally using request logs, but my guess would be that the verbosity level of either haproxy or syslogd is too low to allow them through. Check that your haproxy config is set to log those messages, and try adding something like -l 7 to the syslogd command. |
@dack No luck :( - my bet is that syslogd is not listening on UDP port 514, which is the one haproxy uses to send log data to syslog.
@wichon Are you using the alpine-based haproxy container? If not, syslogd may require a totally different set of options/configuration. I have only tried my solution with Alpine. The busybox syslogd (as used in Alpine) listens on UDP 514 by default. You can see the CLI options here: https://busybox.net/downloads/BusyBox.html (search for syslogd on that page). |
@bjaglin hello, can you tell me how you do it?
@Jonast I also encounter this kind of situation:
Starting rsyslogd appears to allow the logging to work, but then the container doesn't respond to stop events and has to be killed. Is there a way to resolve that?
@dack With your solution, haproxy is not pid 1, and therefore wouldn't receive signals passed via docker. See https://www.ctl.io/developers/blog/post/gracefully-stopping-docker-containers. |
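One way to keep haproxy as PID 1 without an init shim is to background syslogd and then exec haproxy, so it replaces the shell and receives docker's signals directly. A sketch of such an entrypoint script (note the backgrounded syslogd is then unsupervised, which is why an init process may still be preferable):

```sh
#!/bin/sh
# run busybox syslogd in the background, writing to the container's stdout
/sbin/syslogd -O /proc/1/fd/1
# exec replaces this shell, so haproxy becomes PID 1 and receives stop signals
exec haproxy -f /usr/local/etc/haproxy/haproxy.cfg -db
```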
You can use https://github.com/Yelp/dumb-init or https://github.com/krallin/tini to forward signals in a docker container. Edit: Or the --init flag to docker run.
FWIW, for debug and development, I used the tips from @dack but had to make an adjustment (this may have been the issue @wichon saw). Using a Dockerfile with my haproxy config, I started up with:

```sh
#!/bin/sh -ex
/sbin/syslogd -O /proc/1/fd/1   # <--- link to docker's stdout, not "your stdout"
haproxy -f /usr/local/etc/haproxy/haproxy.cfg -db   # <--- stay in foreground
```

Again, I'm using this mostly for debug and devel. You'd want something to forward signals, etc., for production (a mini init process such as https://github.com/krallin/tini, HT @ryansch). Comments and thoughts welcome!
I'd love to see benchmarks. It seems to me that "we will not provide this convenient feature because performance" does need to be supported by numbers, otherwise the convenience is more helpful. Also the nginx docker container (and thus nginx) logs to stdout/stderr just fine. It's insanely focussed on performance and can be used as a reverse proxy for multiple back-ends. Though I do want to use haproxy for the health checks (paid feature in nginx and I am working with a non-profit). |
Please make the official haproxy image to use docker's logging standards by default. It's a PITA not being able to tell what the hell is going on |
I'd also need a debug mode :D wouldn't it be an okay idea to have an haproxy image that has full debug mode enabled ? |
I am now using tini and a script like the one @client9 provided. This is a better workaround than my original one, as it handles signals properly. If you are using swarm services/stacks, then --init is not available as an option. However, you can just add tini via the Dockerfile like this:
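The Dockerfile snippet didn't make it into this copy of the thread; a minimal sketch of adding tini to the Alpine image (the script name and paths are assumptions, not the commenter's original) could look like:

```dockerfile
FROM haproxy:alpine
RUN apk add --no-cache tini
COPY start.sh /start.sh
# tini runs as PID 1 and forwards signals to its child process
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/start.sh"]
```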
Hello @dack |
Still nothing? |
I've also struggled for a bit to enable debug logging for http requests. Dockerfile Docker Compose: Docker run: instead of the default command found in the official dockerfile:
This will start HAProxy as PID 1 (as Docker recommends) in verbose debug mode and start logging everything to stdout. Hope this helps you guys along - happy logging!
The current haproxy docker image repo is at https://github.com/docker-library/ Related discussion is happening in the docker-library/haproxy#39 PR.
Please check out the solution I used for opendkim. It is a one-liner and requires only
2019, still think it should be easier. Reading through 50 comments to turn on a 'switch' .. |
HAProxy 1.9 now supports stdout and stderr logging; please refer to the documentation of the log keyword.
This article describes logging for containers in HAProxy for containers in 1.9: https://www.haproxy.com/blog/introduction-to-haproxy-logging/#haproxy-logging-configuration. Using HAProxy 1.9.x, I'm able to direct logs to Docker's defined logging mechanism using this configuration:
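The configuration block was lost in this copy of the thread; per the HAProxy 1.9 log documentation, sending logs to stdout looks roughly like this (facility, level, and the rest of the config are assumptions):

```
global
    log stdout format raw local0 info

defaults
    log global
    mode http
    option httplog
```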
* Update system-config from branch 'master'

Collect haproxy logs via syslog

Haproxy wants to log to syslog (and not stdout, for performance reasons; see dockerfile/haproxy#3). However there is no running syslog in our haproxy container. What we can do is mount in the host's /dev/log and have haproxy write to the host's syslog to get logging. Do this via a docker compose volume bind mount.

Change-Id: Icf4a91c2bc5f5dbb0bfb9d36e7ec0210c6dc4e90
The instructions are explicit about using 127.0.0.1 for logging, but AFAIK the rsyslogd daemon is not started within the container (if it's even installed). Am I missing something?

I am now leaning towards --link-ing this container to an rsyslogd one, to use it as the log target within the configuration. Anyone else had a similar approach? Thanks!