# FAQ

## Table of Contents

- What metrics are exposed by the metrics server?
- How is CPU usage calculated?
- How is memory usage calculated?
- How does the metrics server calculate metrics?
- How often is metrics server released?
- Can I run two instances of metrics-server?
- How to run metrics-server securely?
- How to run metrics-server on a different architecture?
- What Kubernetes versions are supported?
- How is resource utilization calculated?
- How to autoscale Metrics Server?
- Can I get other metrics besides CPU/Memory using Metrics Server?
- How large can clusters be?
- How often are metrics scraped?

## What metrics are exposed by the metrics server?

Metrics Server collects the resource usage metrics needed for autoscaling: CPU and memory. Metric values use Metric System prefixes (n = 10⁻⁹ and Ki = 2¹⁰), the same as those used to define pod requests and limits. Metrics Server itself is not responsible for calculating metric values; this is done by the Kubelet.
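As an illustration of those prefixes, here is a hypothetical helper (not part of metrics-server or the Kubernetes client libraries) that converts quantity strings into base units:

```python
# Illustrative suffix table: "n" for CPU nanocores, binary suffixes for memory.
SUFFIXES = {
    "n": 1e-9,     # nano, e.g. CPU reported as "156340000n" cores
    "u": 1e-6,
    "m": 1e-3,     # millicores
    "Ki": 2**10,   # kibibytes, e.g. memory reported as "4192156Ki"
    "Mi": 2**20,
    "Gi": 2**30,
}

def parse_quantity(q: str) -> float:
    """Parse a Kubernetes-style quantity like '156340000n' or '4192156Ki'."""
    # Check longer suffixes first so "Mi" is not mistaken for "m" etc.
    for suffix, factor in sorted(SUFFIXES.items(), key=lambda kv: -len(kv[0])):
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)

print(parse_quantity("156340000n"))  # ≈ 0.156 CPU cores
print(parse_quantity("4192156Ki"))   # memory in bytes
```

For production use, the canonical quantity parser lives in Kubernetes' apimachinery (`resource.Quantity`); the sketch above only covers the suffixes mentioned here.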

## How is CPU usage calculated?

CPU is reported as the average core usage measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers and 1 hyperthread on bare-metal Intel processors.

This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (on both Linux and Windows). The time window used to calculate CPU is exposed under the `window` field in the Metrics API.
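The rate calculation can be sketched as follows (illustrative only, not metrics-server's actual code):

```python
def cpu_usage_cores(cum_ns_start: int, cum_ns_end: int, window_seconds: float) -> float:
    """Average core usage over a window.

    The kernel exposes a cumulative counter of CPU time consumed (here in
    nanoseconds); dividing the delta by the window length yields cores.
    """
    return (cum_ns_end - cum_ns_start) / (window_seconds * 1e9)

# A container that consumed 30s of CPU time over a 60s window used 0.5 cores:
print(cpu_usage_cores(0, 30_000_000_000, 60.0))  # 0.5
```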

Read more about Meaning of CPU.

## How is memory usage calculated?

Memory is reported as the working set at the instant the metric was collected, measured in bytes.

In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure. However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate. It includes all anonymous (non-file-backed) memory since Kubernetes does not support swap. The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.
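On Linux, for example, the heuristic is roughly total cgroup memory usage minus inactive file-backed pages (which the kernel could reclaim under pressure). A simplified sketch, assuming those two values as inputs:

```python
def working_set_bytes(usage_bytes: int, inactive_file_bytes: int) -> int:
    """Rough Linux cgroup heuristic: total memory usage minus inactive
    file-backed pages, floored at zero. Simplified for illustration."""
    return max(usage_bytes - inactive_file_bytes, 0)

# 500 MiB in use, of which 120 MiB is inactive file cache:
print(working_set_bytes(500 * 2**20, 120 * 2**20))  # 398458880 (380 MiB)
```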

Read more about Meaning of memory.

## How does the metrics server calculate metrics?

Metrics Server itself doesn't calculate any metrics; it aggregates values exposed by the Kubelet and exposes them in an API to be used for autoscaling. For any problem with metric values, please contact SIG-Node.

## How often is metrics server released?

There is no hard release schedule. A release is done after an important feature is implemented or upon request.

## Can I run two instances of metrics-server?

Yes, but it will not provide any benefit. Both instances will scrape all nodes to collect metrics, but only one instance will actively serve the Metrics API.

## How to run metrics-server securely?

Suggested configuration:

- Cluster with RBAC enabled
- Kubelet read-only port disabled
- Validate kubelet certificates by mounting a CA file and providing the `--kubelet-certificate-authority` flag to metrics server
- Avoid passing insecure flags to metrics server (`--deprecated-kubelet-completely-insecure`, `--kubelet-insecure-tls`)
- Consider using your own certificates (`--tls-cert-file`, `--tls-private-key-file`)
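As an illustration, the certificate-related flags above could be wired into the metrics-server Deployment like this (a sketch only; the volume names, mount paths, and image tag are placeholders, not taken from the official manifests):

```yaml
containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server/metrics-server:v0.4.4
    args:
      # Verify the kubelet's serving certificate against a mounted CA.
      - --kubelet-certificate-authority=/etc/kubelet-ca/ca.crt
      # Serve the Metrics API with your own certificate pair.
      - --tls-cert-file=/etc/tls/tls.crt
      - --tls-private-key-file=/etc/tls/tls.key
    volumeMounts:
      - name: kubelet-ca
        mountPath: /etc/kubelet-ca
        readOnly: true
```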

## How to run metrics-server on a different architecture?

Starting from v0.3.7, the Docker image `k8s.gcr.io/metrics-server/metrics-server` supports multiple architectures via a manifest list. Supported architectures: amd64, arm, arm64, ppc64le, s390x.

## What Kubernetes versions are supported?

Metrics server is tested against the last 3 Kubernetes versions.

## How is resource utilization calculated?

Metrics Server doesn't provide resource utilization metrics (e.g. percent of CPU used). The utilization presented by `kubectl top` and the HPA is calculated client-side, based on pod resource requests or node capacity.
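The client-side arithmetic is simple; a sketch of what such a calculation looks like (illustrative, not the actual kubectl/HPA code):

```python
def utilization_percent(usage: float, reference: float) -> float:
    """Utilization as a percentage of a reference value: the pod's
    resource request (for HPA) or the node's capacity (for kubectl top)."""
    return 100.0 * usage / reference

# A pod using 0.25 cores against a 0.5-core request shows 50% utilization:
print(utilization_percent(0.25, 0.5))  # 50.0
```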

## How to autoscale Metrics Server?

Metrics Server scales vertically; its resource requirements grow linearly with the number of nodes and pods in a cluster. This can be automated using addon-resizer.

## Can I get other metrics besides CPU/Memory using Metrics Server?

No. Metrics Server was designed to provide metrics for the resource metrics pipeline used for autoscaling.

## How large can clusters be?

Metrics Server has been tested on clusters of up to 5000 nodes with an average pod density of 30 pods per node.

## How often are metrics scraped?

By default, every 60 seconds; this can be changed using the `metric-resolution` flag. We do not recommend setting values below 15s, as this is the resolution of metrics calculated by the Kubelet.