Knative Serving offers two different monitoring setups: Elasticsearch, Kibana, Prometheus, and Grafana; or Stackdriver, Prometheus, and Grafana. You can install only one of these two setups, and side-by-side installation of the two is not supported.
The following instructions assume that you cloned the Knative Serving repository. To clone the repository, run the following commands:

```
git clone https://github.com/knative/serving knative-serving
cd knative-serving
git checkout v0.2.3
```
If you installed the full Knative release, the monitoring component is already installed and you can skip down to the Create Elasticsearch Indices section.
To configure and set up monitoring:
- Choose a container image that meets the Fluentd image requirements. For example, you can use the public image `k8s.gcr.io/fluentd-elasticsearch:v2.0.4`, or you can build a custom one and upload the image to a container registry that your cluster has read access to (a build-and-push sketch follows).
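If you go the custom-image route, the image has to end up in a registry your nodes can pull from. A minimal sketch, assuming a local Dockerfile that satisfies the Fluentd image requirements and a hypothetical registry path `gcr.io/my-project` (both are placeholders, not values from this guide):

```
# Build a custom Fluentd image from a local Dockerfile; the tag and
# registry path below are placeholders -- substitute your own.
docker build -t gcr.io/my-project/fluentd-elasticsearch:v2.0.4 .

# Push the image to a registry that your cluster has read access to.
docker push gcr.io/my-project/fluentd-elasticsearch:v2.0.4
```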
- Follow the instructions in "Setting up a logging plugin" to configure the Elasticsearch components' settings.
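The authoritative configuration steps are in "Setting up a logging plugin"; as a quick sanity check before applying the manifests, you can list which images they reference and confirm your chosen Fluentd image appears (the directory path is taken from the install command below):

```
# List every container image referenced by the monitoring manifests so you
# can confirm the Fluentd image you configured is the one that gets deployed.
grep -rn "image:" config/monitoring/
```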
- Install Knative monitoring components by running the following command from the root directory of the knative/serving repository:

```
kubectl apply --recursive --filename config/monitoring/100-common \
    --filename config/monitoring/150-elasticsearch \
    --filename third_party/config/monitoring/common \
    --filename third_party/config/monitoring/elasticsearch \
    --filename config/monitoring/200-common \
    --filename config/monitoring/200-common/100-istio.yaml
```
The installation is complete when the logging and monitoring components all report a `Running` or `Completed` status:

```
kubectl get pods --namespace knative-monitoring --watch
```

```
NAME                                  READY   STATUS    RESTARTS   AGE
elasticsearch-logging-0               1/1     Running   0          2d
elasticsearch-logging-1               1/1     Running   0          2d
fluentd-ds-5kc85                      1/1     Running   0          2d
fluentd-ds-vhrcq                      1/1     Running   0          2d
fluentd-ds-xghk9                      1/1     Running   0          2d
grafana-798cf569ff-v4q74              1/1     Running   0          2d
kibana-logging-7d474fbb45-6qb8x       1/1     Running   0          2d
kube-state-metrics-75bd4f5b8b-8t2h2   4/4     Running   0          2d
node-exporter-cr6bh                   2/2     Running   0          2d
node-exporter-mf6k7                   2/2     Running   0          2d
node-exporter-rhzr7                   2/2     Running   0          2d
prometheus-system-0                   1/1     Running   0          2d
prometheus-system-1                   1/1     Running   0          2d
```
Press CTRL+C to exit the watch.
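If you prefer a non-interactive check, for example in a setup script, `kubectl wait` (available in kubectl 1.11 and later) can block until the pods report ready. This is a sketch, not part of the official instructions; note that pods which end in the `Completed` state never become `Ready`, so fall back to the watch above if the check hangs on such a pod:

```
# Block until every pod in the knative-monitoring namespace is Ready,
# or fail after five minutes.
kubectl wait --for=condition=Ready pod --all \
  --namespace knative-monitoring --timeout=300s
```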
- Verify that each of your nodes has the `beta.kubernetes.io/fluentd-ds-ready=true` label:

```
kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
```
- If you receive the `No Resources Found` response:

  - Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:

    ```
    kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
    ```

  - Run the following command to ensure that the `fluentd-ds` DaemonSet is ready on at least one node (a `jsonpath`-based check follows this list):

    ```
    kubectl get daemonset fluentd-ds --namespace knative-monitoring
    ```
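To check the DaemonSet without reading the full table, you can compare its desired and ready pod counts directly. A small sketch using standard DaemonSet status fields:

```
# Print desired vs. ready pod counts for fluentd-ds; the two numbers should
# match once the DaemonSet is fully rolled out.
kubectl get daemonset fluentd-ds --namespace knative-monitoring \
  -o jsonpath='{.status.desiredNumberScheduled} {.status.numberReady}{"\n"}'
```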
To visualize logs with Kibana, you need to set which Elasticsearch indices to explore. We will create two indices in Elasticsearch: one using `Logstash` for application logs and one using `Zipkin` for request traces.
- To open the Kibana UI (the visualization tool for Elasticsearch), you must start a local proxy by running the following command:

```
kubectl proxy
```

This command starts a local proxy of Kibana on port 8001. For security reasons, the Kibana UI is exposed only within the cluster.
- Navigate to the Kibana UI (see the example URL below). It might take a couple of minutes for the proxy to start working.
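With `kubectl proxy` running, the Kibana UI is reachable through the Kubernetes apiserver's service proxy path. Assuming the service is named `kibana-logging` (consistent with the pod names in the listing above) and that Kibana serves its UI under `app/kibana`, the URL looks like:

```
http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana
```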
- Within the "Configure an index pattern" page, enter `logstash-*` in the `Index pattern` field, select `@timestamp` from `Time Filter field name`, and click the `Create` button.
- To create the second index, select the `Create Index Pattern` button at the top left of the page. Enter `zipkin*` in the `Index pattern` field, select `timestamp_millis` from `Time Filter field name`, and click the `Create` button.
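If neither index pattern matches any indices, you can query Elasticsearch directly through the same proxy to see which indices exist. This is a sketch; the service name `elasticsearch-logging` and port `9200` are assumptions based on the pod names above and Elasticsearch's default port:

```
# List all Elasticsearch indices via the apiserver service proxy; logstash-*
# and zipkin* entries should appear once logs and traces start flowing.
curl "http://localhost:8001/api/v1/namespaces/knative-monitoring/services/elasticsearch-logging:9200/proxy/_cat/indices?v"
```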
You must configure and build your own Fluentd image if either of the following is true (a quick check follows this list):
- Your Knative Serving component is not hosted on a Google Cloud Platform (GCP) based cluster.
- You want to send logs to another GCP project.
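One quick way to tell whether your cluster is GCP based is to inspect each node's provider ID, which the cloud provider controller sets on the Node object; on Google Cloud these values start with `gce://`:

```
# Print the provider ID of every node; values starting with gce:// indicate
# a Google Cloud (GKE or GCE) based cluster.
kubectl get nodes -o jsonpath='{range .items[*]}{.spec.providerID}{"\n"}{end}'
```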
To configure and set up monitoring:
- Choose a container image that meets the Fluentd image requirements. For example, you can use a public image, or you can build a custom one and upload the image to a container registry that your cluster has read access to.
- Follow the instructions in "Setting up a logging plugin" to configure the Stackdriver components' settings.
- Install Knative monitoring components by running the following command from the root directory of the knative/serving repository:

```
kubectl apply --recursive --filename config/monitoring/100-common \
    --filename config/monitoring/150-stackdriver \
    --filename third_party/config/monitoring/common \
    --filename config/monitoring/200-common \
    --filename config/monitoring/200-common/100-istio.yaml
```
The installation is complete when the logging and monitoring components all report a `Running` or `Completed` status:

```
kubectl get pods --namespace knative-monitoring --watch
```

```
NAME                                  READY   STATUS    RESTARTS   AGE
fluentd-ds-5kc85                      1/1     Running   0          2d
fluentd-ds-vhrcq                      1/1     Running   0          2d
fluentd-ds-xghk9                      1/1     Running   0          2d
grafana-798cf569ff-v4q74              1/1     Running   0          2d
kube-state-metrics-75bd4f5b8b-8t2h2   4/4     Running   0          2d
node-exporter-cr6bh                   2/2     Running   0          2d
node-exporter-mf6k7                   2/2     Running   0          2d
node-exporter-rhzr7                   2/2     Running   0          2d
prometheus-system-0                   1/1     Running   0          2d
prometheus-system-1                   1/1     Running   0          2d
```
Press CTRL+C to exit the watch.
- Verify that each of your nodes has the `beta.kubernetes.io/fluentd-ds-ready=true` label:

```
kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
```
- If you receive the `No Resources Found` response:

  - Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:

    ```
    kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
    ```

  - Run the following command to ensure that the `fluentd-ds` DaemonSet is ready on at least one node:

    ```
    kubectl get daemonset fluentd-ds --namespace knative-monitoring
    ```
- Learn more about accessing logs, metrics, and traces in the Knative Serving documentation.