[Helm] Chart does not mount /var/log/containers/ on Elastic-Agent pod, thus no logs are ingested #6204
Comments
Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)
Thanks for the issue @belimawr. What happens here is that under Fleet mode the chart renders nothing that derives from an integration config, as the config is controlled by Fleet. As a result, volume mounts and other related bits are, as you already observed, not present in the k8s manifest of elastic-agent. Specifically, the container logs volume mount, which is injected when a user enables the Kubernetes integration, does not apply here. Most probably we want to alter this behaviour and allow a user to enable integrations even when deploying a Fleet-managed elastic-agent, which would keep those bits (mounts, permissions) while still leaving the elastic-agent config to Fleet. cc @ycombinator this probably needs some prio 🙂
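For reference, below is a minimal sketch of the kind of hostPath mount the Kubernetes integration injects for container logs in standalone mode; the names are illustrative, not the chart's actual template output. Note that the files under /var/log/containers are symlinks, so in practice /var/log/pods (and, on Docker hosts, /var/lib/docker/containers) usually has to be mounted as well.

```yaml
# Illustrative pod spec fragment only; the chart's real templates may differ.
spec:
  containers:
    - name: agent
      volumeMounts:
        - name: varlogcontainers
          mountPath: /var/log/containers
          readOnly: true
        - name: varlogpods              # targets of the /var/log/containers symlinks
          mountPath: /var/log/pods
          readOnly: true
  volumes:
    - name: varlogcontainers
      hostPath:
        path: /var/log/containers
    - name: varlogpods
      hostPath:
        path: /var/log/pods
```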
Because the default configuration of the Kubernetes integration is to collect pod logs, I believe we should have the mount enabled by default in the helm chart.
I hear you, and you already captured something that is not enabled by default: the Kubernetes integration 🙂 What I mean is that installing a Fleet-managed agent is not, in practice, tightly coupled to the Kubernetes integration, and the volume mount is only needed for the latter. Thus having the mount on by default in the Kubernetes integration, which the user can enable explicitly, seems like the right place for it.
I see your point, it makes sense. |
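For illustration, enabling the chart's built-in Kubernetes integration at install time might look roughly like the sketch below. Treat the exact value keys as assumptions; they depend on the chart version.

```yaml
# values.yaml sketch; key names are assumptions and may vary between chart versions.
# Install with something like: helm upgrade --install elastic-agent <chart> -f values.yaml
kubernetes:
  enabled: true   # turn on the Kubernetes integration, which injects the log mounts and RBAC it needs
```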
In general, it is the case that a user can add integrations in Fleet that their agent deployment in K8s is unable to run, due to missing mounts or permissions. Container logs are just the most obvious example of this, but it applies just as well to system metrics (which require a /proc mount from the node) or cluster metrics (which require specific RBAC). In the absence of an actual operator that Fleet could talk to, the best we can do is either:
1. Grant the agent every mount and permission it could plausibly need by default, so any integration added in Fleet just works.
2. Only add the mounts and permissions for the integrations the user explicitly enables when deploying the chart.
Right now, I think we're going with 2? But it's a valid discussion to have.
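To make the per-integration requirements above concrete: cluster-level metrics typically need read access to core API resources. Below is a sketch of the kind of RBAC involved, not the chart's actual ClusterRole; the name and exact rules are hypothetical and vary by data stream.

```yaml
# Sketch only; real rules vary by data stream and chart version.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent-example   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["nodes", "pods", "namespaces", "events"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets", "daemonsets"]
    verbs: ["get", "list", "watch"]
```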
Option 1 is closer to the current strategy in the reference configurations: https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-managed/elastic-agent-managed-daemonset.yaml. This makes sure everything works when getting started and experimenting. The drawback is that we get regular requests to explain the mounts and privilege level, or to reduce them to the minimum required for a use case, and these requests often block deployment or adoption of agent until they are resolved. My current view on this would be that:
- the Kubernetes integration should be enabled by default in the helm chart, and
- the mounts and permissions it needs should come with it.
This would establish that on a native system where agent runs as a service, system is the default integration. When run on Kubernetes, the Kubernetes integration is the default integration because the cluster is the system in this world. CC @mlunadia in case you have opinions here. |
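To make option 1 concrete: the reference daemonset linked above mounts broad host paths up front so that most integrations work out of the box, roughly along the lines of the sketch below (paraphrased, not an exact copy; consult the linked file for the real set). It also runs the agent with elevated privileges, which is exactly the privilege level users ask to reduce.

```yaml
# Paraphrased sketch of the reference daemonset's host mounts; see the linked manifest for the authoritative list.
volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
  - name: proc
    hostPath:
      path: /proc
  - name: cgroup
    hostPath:
      path: /sys/fs/cgroup
```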
Issue description
Version: 8.16.0, main
The Helm chart does not mount /var/log/containers/ into the Elastic-Agent container, therefore it cannot read logs from any container in the cluster.
Steps to reproduce
Search for event.dataset: kubernetes.container_logs; no container logs are found.
Workaround
Use the manifest provided in Kibana when selecting "Add agent" -> "Kubernetes"