Description
When the fluent-logger-golang version was updated to v1.4.0 here, I believe a semantic change was inadvertently introduced that changes the meaning of the fluentd-buffer-limit option.
As part of the v1.4.0 changes to fluent-logger-golang, the BufferLimit option was changed from a limit on buffered bytes to the capacity of a channel (a number of queued messages) here.
This results in roughly 8x the expected memory usage on startup when this option is set along with the async option, because the channel's backing array holds one 8-byte pointer per entry instead of one byte.
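To make the difference concrete, here is a rough sketch of the two interpretations (the msgToSend name and the exact shape are illustrative, not the library's actual code):

```go
package main

import (
	"fmt"
	"unsafe"
)

// msgToSend stands in for whatever per-message struct the logger queues;
// the name is illustrative, not necessarily the library's real type.
type msgToSend struct{ data []byte }

func main() {
	// fluentd-buffer-limit=1g, as parsed by the log driver, is 2^30.
	const bufferLimit = 1 << 30

	// Pre v1.4.0: BufferLimit capped the number of buffered *bytes*, so the
	// worst case was roughly 1 GiB of queued log data.

	// v1.4.0+: BufferLimit is used as the capacity of the pending-message
	// channel, i.e. something like make(chan *msgToSend, bufferLimit).
	// A buffered channel allocates its backing array eagerly, so the cost
	// is paid at startup: capacity * element size, regardless of how much
	// log data is actually queued.
	perEntry := int(unsafe.Sizeof((*msgToSend)(nil))) // 8 bytes on 64-bit platforms
	fmt.Printf("channel backing array: %d bytes (~%d GiB)\n",
		bufferLimit*perEntry, bufferLimit*perEntry>>30)
}
```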
Steps to reproduce the issue:
1. Have dockerd 19.03 running.
2. Run a command like the one below; the container image used doesn't matter:
docker run --rm --log-driver=fluentd --log-opt fluentd-async-connect=true --log-opt fluentd-buffer-limit=1g alpine /bin/sh
It's easier to see the effect with larger buffer sizes, but the behavior is the same regardless of buffer size as far as I can tell.
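For what it's worth, the allocation can also be reproduced outside of dockerd by constructing the logger directly. This is a minimal sketch assuming fluent-logger-golang v1.4.x (Config.Async and Config.BufferLimit behaving as described above); I used a smaller limit than 1g here so it doesn't OOM a small machine, but the growth scales linearly:

```go
package main

import (
	"fmt"
	"runtime"

	"github.com/fluent/fluent-logger-golang/fluent"
)

func main() {
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)

	// Async is what dockerd sets for fluentd-async-connect=true. With Async
	// set, New does not need a reachable fluentd instance, but it does
	// allocate the pending-message channel up front.
	logger, err := fluent.New(fluent.Config{
		Async:       true,
		BufferLimit: 64 << 20, // 64 Mi entries; at 8 bytes per pointer that is ~512 MiB
	})
	if err != nil {
		panic(err)
	}
	defer logger.Close()

	runtime.ReadMemStats(&after)
	fmt.Printf("Sys grew by ~%d MiB just from constructing the logger\n",
		(after.Sys-before.Sys)>>20)
}
```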
Describe the results you received:
See that memory usage has increased by ~8 GB instead of 1 GB (a 1 GiB limit becomes 2^30 channel slots of 8 bytes each); if that pushes the host out of memory, dockerd crashes.
Describe the results you expected:
I am not 100% sure here. The semantics changed, but the previous behavior was under-documented. The documentation here says this about the fluentd-buffer-limit option:
The amount of data to buffer before flushing to disk. Defaults to the amount of RAM available to the container.
It's not explicitly stated that the value is meant to be in bytes, but the parsing function here definitely makes it seem like that was the intent. The default also appears to be 1 MB and not the amount of RAM available to the container.
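For reference, my reading of the driver's option parsing (paraphrased from memory, not the exact moby source; the defaultBufferLimit value here is my understanding of the default) is roughly the following, which is why byte-style values such as 1m or 1g are accepted:

```go
package main

import (
	"fmt"

	units "github.com/docker/go-units"
)

// Paraphrase of how I understand the fluentd log driver handles the
// fluentd-buffer-limit option; not a verbatim copy of the driver code.
const defaultBufferLimit = 1024 * 1024 // 1 MiB, not "the RAM available to the container"

func parseBufferLimit(opt string) (int64, error) {
	if opt == "" {
		return defaultBufferLimit, nil
	}
	// RAMInBytes understands suffixes like "1m" or "1g" (1024-based),
	// which only makes sense if the limit is meant to be in bytes.
	return units.RAMInBytes(opt)
}

func main() {
	limit, err := parseBufferLimit("1g")
	if err != nil {
		panic(err)
	}
	fmt.Println(limit) // 1073741824, passed straight through as BufferLimit
}
```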
Additional Question:
I also had a question about the expected memory usage of this option. Is the memory expected to remain in use even after the container exits?
The behavior I am seeing is that if I run this command once, memory usage increases by ~8 GB and does not go down when the container exits.
docker run --rm --log-driver=fluentd --log-opt fluentd-async-connect=true --log-opt fluentd-buffer-limit=1g alpine /bin/sh
If I run this a second time, it will crash because my machine has ~16 GB of memory (presumably it tries to allocate an additional 8 GB). Is this expected?
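For what it's worth, the memory not going down is at least consistent with how buffered channels behave in Go: the backing array is allocated at make() time and can only be reclaimed once the channel itself is unreachable. A minimal sketch of that behavior in isolation (this is ordinary Go semantics, not an answer to whether dockerd should be releasing the logger after the container exits):

```go
package main

import (
	"fmt"
	"runtime"
)

// heapInUseMiB reports how much heap the process is currently holding.
func heapInUseMiB() uint64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapInuse >> 20
}

func main() {
	fmt.Println("before:", heapInUseMiB(), "MiB")

	// A buffered channel's backing array is allocated when the channel is
	// created; never writing to it (or draining it) does not shrink it.
	pending := make(chan *struct{}, 64<<20) // ~512 MiB of pointer slots on 64-bit

	fmt.Println("after make:", heapInUseMiB(), "MiB")

	// While `pending` is still referenced, GC cannot free the backing array.
	runtime.GC()
	fmt.Println("after GC, channel still referenced:", heapInUseMiB(), "MiB")

	runtime.KeepAlive(pending)
}
```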
@thaJeztah: In the linked issue on fluent-logger-golang I think we found the root commit where this was changed. I'd be interested in what you think the best update here is. Changing it is a little tricky: making fluentd-buffer-limit match the current meaning from fluent-logger-golang (which seems like the right move) is not a backwards-compatible change, since people would no longer be able to pass values like 1M. We could leave it as is, I suppose, but it's unintuitive.
Hi, I'd like to re-raise this issue. I cut a new issue to fluent-logger-golang (fluent/fluent-logger-golang#128) to try to get a new option to limit the number of bytes buffered without changing the current semantics again.
Output of docker version:
Output of docker info:
Additional environment details (AWS, VirtualBox, physical, etc.):
My testing was done on an m4.xlarge on AWS with Amazon Linux 2.