[kube-prometheus-stack] Default port on which ETCD exposes its metrics #3725
base: main
Conversation
By default ETCD exposes its metrics on port 2379 and not 2381. Signed-off-by: xogoodnow <[email protected]>
Default ETCD port
The Helm chart prefers to enable secure settings by default, and port 2381 is the etcd TLS port that is used to fetch metrics. It is possible that your clusters are either old or do not provide secure access to etcd, in which case you need to configure the port to 2379, but that should only be the case for your clusters.
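For clusters in that situation, here is a minimal sketch of the kind of values override being discussed, using the kubeEtcd keys referenced later in this PR; the port numbers simply mirror the discussion above:

```yaml
# values.yaml override for a cluster that only serves etcd metrics on the
# client port (2379); all other chart defaults are left untouched.
kubeEtcd:
  service:
    port: 2379
    targetPort: 2379
```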
Hi @xogoodnow, thank you for the contribution. Please check #2264 regarding the ETCD ports.
There are no documents or comments on how to persist Grafana. The chart has to be read to understand how to persist it, which is unnecessary when a few lines of comments would do. Signed-off-by: xogoodnow <[email protected]>
Hi @QuentinBisson,
Hi @GMartinez-Sisti, just deploy the latest version of the cluster and install the chart; it does not work.
My impression is that introducing 2381 as an Etcd metrics port in the chart was based on a specific internal change in kubeadm, judging from the link to its constants above and in the original PR. From the Etcd metrics-endpoint documentation:
The client port is indeed 2379/tcp and is supposed to always be protected by TLS. kubeadm introduced port 2381 in a change to access Etcd's metrics endpoint. The changes in kubeadm have no effect on how Etcd primarily serves its metrics: through the client port.
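To make the kubeadm side of this concrete: etcd's separate, plaintext metrics listener is configured through its --listen-metrics-urls flag, which kubeadm sets for the etcd static pod. A hedged sketch of a kubeadm ClusterConfiguration that widens that listener (the API version and bind address are illustrative and must be adapted to the cluster):

```yaml
# Illustrative kubeadm ClusterConfiguration snippet: the extra metrics
# listener on 2381 only becomes scrapeable by Prometheus if it is bound
# to an address reachable from the cluster network, not just 127.0.0.1.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  local:
    extraArgs:
      listen-metrics-urls: http://0.0.0.0:2381
```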
IMHO if we decide to go ahead and change the port, it should be considered a breaking change to ensure we don't break existing installations.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Hi, considering that this would be a breaking change, do we have any estimate of how and when to make these changes to the chart?
Bumping the major chart version plus upgrade notes in the README.md is the way to go for a breaking change, and it needs to be part of the PR itself.
```yaml
## To make Grafana persistent (using a StatefulSet)
##
# persistence:
#   enabled: true
#   type: sts
#   storageClassName: <StorageClassName>
#   accessModes:
#     - ReadWriteOnce
#   size: 20Gi
#   finalizers:
#     - kubernetes.io/pvc-protection
```
These shouldn't be here. Can you please remove them?
As @jkroepke said, please do:
Bumping the major chart version plus upgrade notes in the README.md is the way to go for a breaking change, and it needs to be part of the PR itself.
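As a rough illustration of that process, a hypothetical Chart.yaml bump (the chart name comes from this repository, but the version numbers are made up for the example):

```yaml
# Chart.yaml: a major version bump signals the breaking change of the
# default etcd metrics port; the migration itself is described in the
# chart README's upgrade notes.
name: kube-prometheus-stack
version: 60.0.0   # previous release 59.x.y in this hypothetical example
```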
Omitted the part about persisting Grafana. P.S.: Sent a separate PR for that. Thank you. Signed-off-by: Ali <[email protected]>
Changed the chart version number prometheus-community#4075 Signed-off-by: Ali <[email protected]>
Fixed a typo Signed-off-by: Ali <[email protected]>
Bumped the chart version prometheus-community#4075 Signed-off-by: Ali <[email protected]>
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Hi
Yes, please take note of the review comment from @GMartinez-Sisti and resolve the MR conflicts.
Hi @jkroepke,
Hi @xogoodnow, currently I'm missing everything that was requested.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
By default ETCD exposes its metrics on port 2379 and not 2381
What this PR does / why we need it
By default ETCD exposes its metrics on port 2379 and not 2381
Which issue this PR fixes
No service (by default) is listening on port 2381 on the cluster, so after installing the chart the metrics for etcd would not be collected unless the port in values.yaml (or the ETCD configuration) is changed.
Changing the default port for etcd metrics from 2381 to 2379 at the following paths:
kubeEtcd.service.port and kubeEtcd.service.targetPort
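Both keys matter because the chart exposes etcd through a Kubernetes Service, where port and targetPort play different roles. A generic, hypothetical Service illustrating the distinction (the name and namespace are illustrative, not the chart's actual resource names):

```yaml
# port: the port the Service itself exposes to scrapers.
# targetPort: the port on the backing etcd endpoints that traffic is sent to.
apiVersion: v1
kind: Service
metadata:
  name: example-kube-etcd   # illustrative name
  namespace: kube-system
spec:
  clusterIP: None
  ports:
    - name: http-metrics
      port: 2379
      targetPort: 2379
```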
Special notes for your reviewer
Checklist
Title of the PR starts with chart name (e.g. [prometheus-couchdb-exporter])