Metrics For The Kubernetes Control Plane
System component metrics give insight into what is happening inside those components. Metrics are particularly useful for building dashboards and alerts.
Metrics in the Kubernetes control plane are emitted in Prometheus format and are human-readable.
Metrics in Kubernetes
In most cases, metrics are available on the /metrics endpoint of the component's HTTP server. For components that don't expose the endpoint by default, it can be enabled using the --bind-address flag.
Examples of those components:
- kube-controller-manager: control plane component that runs controller processes.
- kube-proxy: a network proxy that runs on each node in the cluster.
- kube-apiserver: control plane component that serves the Kubernetes API.
- kube-scheduler: control plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.
- kubelet: an agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
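For example, you can query the kube-apiserver metrics endpoint through the Kubernetes API. A minimal sketch, assuming your kubectl context is authorized to read /metrics:

kubectl get --raw /metrics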
In a production environment you may want to configure Prometheus Server or some other metrics scraper to periodically gather these metrics and make them available in some kind of time series database.
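As a rough illustration, a Prometheus scrape job for the API server could look like the sketch below; the job name is a placeholder and the exact discovery and authentication setup depends on how Prometheus is deployed in your cluster:

scrape_configs:
  - job_name: kubernetes-apiservers
    scheme: https
    kubernetes_sd_configs:
      - role: endpoints            # discover endpoints via the Kubernetes API
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabel_configs:
      # keep only the default/kubernetes:https endpoint, i.e. the API server itself
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https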
Note that the kubelet also exposes metrics on the /metrics/cadvisor, /metrics/resource and /metrics/probes endpoints. Those metrics do not have the same lifecycle.
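These kubelet endpoints can be reached through the API server's node proxy, for example (a sketch; replace <node-name> with a real node name, and note that this requires access to the nodes/proxy subresource):

kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics/resource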
If your cluster uses RBAC, reading metrics requires authorization via a user, group or ServiceAccount with a ClusterRole that allows accessing /metrics.
For example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - nonResourceURLs:
      - "/metrics"
    verbs:
      - get
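That ClusterRole then has to be bound to whatever identity performs the scraping. A minimal sketch, assuming a ServiceAccount named prometheus in a monitoring namespace (both names are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: monitoring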
Metric lifecycle
Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deletion
Alpha metrics have no stability guarantees; as such they can be modified or deleted at any time.
Stable metrics are guaranteed not to change; specifically, stability means:
- the metric itself will not be deleted (or renamed)
- the type of metric will not be modified
A deprecated metric signals that the metric will eventually be deleted; to find out in which version, check the metric's HELP annotation, which states the Kubernetes version from which the metric is considered deprecated.
Before deprecation:
# HELP some_counter this counts things
# TYPE some_counter counter
some_counter 0
After deprecation:
# HELP some_counter (Deprecated since 1.15.0) this counts things
# TYPE some_counter counter
some_counter 0
Once a metric is hidden, it is not published for scraping by default. To use a hidden metric, you need to override the configuration for the relevant cluster component.
Once a metric is deleted, the metric is not published. You cannot change this using an override.
Show hidden metrics
As described above, admins can enable hidden metrics through a command-line flag on a specific binary. This is intended as an escape hatch for admins who missed migrating off the metrics deprecated in the last release.
The flag show-hidden-metrics-for-version takes a version for which you want to show metrics deprecated in that release. The version is expressed as x.y, where x is the major version and y is the minor version. The patch version is not needed even though a metric can be deprecated in a patch release, because the metrics deprecation policy runs against minor releases.
The flag can only take the previous minor version as its value. All metrics hidden in the previous minor release will be emitted again if admins set that version in show-hidden-metrics-for-version. Older versions are not allowed because this would violate the metrics deprecation policy.
Take a metric A as an example, and assume that A is deprecated in release 1.n. According to the metrics deprecation policy, we can reach the following conclusions:
- In release 1.n, the metric is deprecated, and it can be emitted by default.
- In release 1.n+1, the metric is hidden by default, and it can be emitted with the command line flag show-hidden-metrics-for-version=1.n.
- In release 1.n+2, the metric should be removed from the codebase. No escape hatch anymore.
If you're upgrading from release 1.12 to 1.13, but still depend on a metric A deprecated in 1.12, you should set the hidden metrics via the command line: --show-hidden-metrics-for-version=1.12, and remember to remove this metric dependency before upgrading to 1.14.
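For example, on a control plane deployed with static Pod manifests, the flag can simply be appended to the component's command line. A sketch, assuming the deprecated metric you still need comes from the scheduler:

kube-scheduler --show-hidden-metrics-for-version=1.12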
Component metrics
kube-controller-manager metrics
Controller manager metrics provide important insight into the performance and health of the controller manager. These metrics include common Go language runtime metrics such as the goroutine count (go_goroutines) and controller-specific metrics such as etcd request latencies or cloud provider (AWS, GCE, OpenStack) API latencies that can be used to gauge the health of a cluster.
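For instance, the goroutine count shows up in the scrape output in the usual Prometheus text format (the value below is purely illustrative):

# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 127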
Starting from Kubernetes 1.7, detailed cloud provider metrics are available for storage operations for GCE, AWS, vSphere and OpenStack. These metrics can be used to monitor the health of persistent volume operations.
For example, for GCE these metrics are called:
cloudprovider_gce_api_request_duration_seconds { request = "instance_list"}
cloudprovider_gce_api_request_duration_seconds { request = "disk_insert"}
cloudprovider_gce_api_request_duration_seconds { request = "disk_delete"}
cloudprovider_gce_api_request_duration_seconds { request = "attach_disk"}
cloudprovider_gce_api_request_duration_seconds { request = "detach_disk"}
cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
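Since these are duration histograms, you can derive latency percentiles from them in Prometheus. A sketch of a PromQL query for the 99th percentile of disk_insert calls over the last 5 minutes (the percentile and window are arbitrary choices):

histogram_quantile(0.99, sum by (le) (rate(cloudprovider_gce_api_request_duration_seconds_bucket{request="disk_insert"}[5m])))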
What's next
- Read about the Prometheus text format for metrics
- See the list of stable Kubernetes metrics
- Read about the Kubernetes deprecation policy