Certified Kubernetes Administrator Exam Series (Part-3): Logging & Monitoring
Introduction to Logging and Monitoring
Logging and monitoring of Kubernetes cluster components and containerized applications are crucial for debugging and lifecycle management.
The basic difference between the two is:
- Monitoring involves continuously examining components to gain an understanding of their health status.
- Logging involves recording cluster events so that problems at application runtime can easily be traced back to their source.
This section introduces the various monitoring and logging options available for Kubernetes clusters.
Monitor Cluster Components
When monitoring the elements of a Kubernetes cluster, we look out for two types of metrics:
- POD-level Metrics – performance data such as per-POD CPU and memory consumption.
- Node-level Metrics – the number of healthy nodes, CPU usage, memory consumption, and network utilization.
A monitoring solution should be able to retrieve these metrics, store the data, and offer analytics that help optimize the cluster. Heapster was the original monitoring solution created for Kubernetes clusters, but it was later deprecated and replaced by the Metrics Server – a more trimmed-down version. The Metrics Server essentially receives, aggregates, and stores performance metrics in memory. Other, more advanced solutions can also store historical cluster performance data on disk.
Note that Kubernetes does not ship with a complete monitoring solution out of the box. There are, however, production-capable open-source monitoring solutions such as the Metrics Server, Elastic Stack, and Prometheus, as well as proprietary solutions such as Datadog and Dynatrace for enterprise-level monitoring capabilities.
The Kubelet service receives instructions from the Kube-API Server on how to run PODs. It also contains a component known as the container advisor (cAdvisor), which collects POD performance metrics and exposes them through the Kubelet API, from where the Metrics Server retrieves them.
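Once the Metrics Server is installed (as shown below), the aggregated data is served through the metrics.k8s.io API group, which can be queried directly as a quick sanity check. These calls are illustrative; the API version may differ depending on your Metrics Server release:
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods"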
For a minikube cluster, the Metrics Server is enabled using the command:
$ minikube addons enable metrics-server
When the addon has been enabled, the command returns the message:
The 'metrics-server' addon is enabled
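As an optional check, you can confirm the addon is active by listing all minikube addons and their status:
$ minikube addons list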
For other types of Kubernetes clusters, the Metrics Server is installed by first cloning the Git repository:
$ git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
The Metrics Server is then deployed using the commands:
$ cd kubernetes-metrics-server
$ kubectl create -f .
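Before querying any metrics, it helps to verify that the Metrics Server deployment is up and running. The deployment name and namespace below assume the defaults used by most manifests and may differ in your cluster; it can also take a minute or two before metrics become available:
$ kubectl get deployment metrics-server -n kube-system
$ kubectl get pods -n kube-system | grep metrics-server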
With the Metrics Server enabled, node-level metrics can be accessed with the command:
$ kubectl top node
To access the POD-level metrics, the following command is used:
$ kubectl top pod
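A few commonly used variations of these commands are shown below; flag availability depends on your kubectl version:
$ kubectl top node --sort-by=cpu
$ kubectl top pod --all-namespaces
$ kubectl top pod --containers
$ kubectl top pod -n kube-system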
Managing Application Logs
In Docker, we run an event simulator container to check application event logs. The event simulator can be downloaded from KodeKloud’s Git Repository using the following:
$ git clone https://github.com/kodekloudhub/event-simulator.git
$ cd event-simulator
$ docker build -t event-simulator .
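As an optional check, confirm that the image was built successfully by listing it:
$ docker image ls event-simulator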
The event simulator is then initiated by running the command:
$ docker run event-simulator
This container generates and displays random events, emulating a web server, and streams these events to standard output. Running the container in detached mode keeps generating the events without displaying them in the terminal:
$ docker run -d event-simulator
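When the container runs in detached mode, its events can still be inspected with docker logs. The container ID below is a placeholder; obtain the actual ID from docker ps:
$ docker ps
$ docker logs -f <container-id>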
For Kubernetes, event logs can be viewed by creating a POD that uses the event-simulator image. The POD's YAML definition file is:
apiVersion: v1
kind: Pod
metadata:
  name: event-simulator
spec:
  containers:
  - name: event-simulator
    image: kodekloud/event-simulator
The simulator is then started by creating the POD:
$ kubectl create -f event-simulator-pod.yaml
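Before checking the logs, it is worth confirming that the POD has reached the Running state:
$ kubectl get pod event-simulator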
We can now stream the logs live by following this POD's logs:
$ kubectl logs -f event-simulator
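A few additional kubectl logs options are also worth knowing. If a POD runs more than one container, the container must be specified with -c (the container name event-simulator below matches the POD definition above); --tail limits the output, and --previous shows logs from a restarted container:
$ kubectl logs -f event-simulator -c event-simulator
$ kubectl logs event-simulator --tail=20
$ kubectl logs event-simulator --previous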
Research Questions & Conclusion
This concludes the Logging & Monitoring section of the CKA certification exam series. To test your knowledge, it is strongly recommended that you work through the research questions covering all the core concepts in the coursework, along with a practice test to prepare for the exam. You can also reach out to the course developers if you have feedback or would like something changed within the course.
Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back.
Quick Tip – Questions below may include a mix of DOMC and MCQ types.
1. Which command can be used to identify the node that consumes the most CPU resource?
[A] kubectl get nodes
[B] kubectl list nodes
[C] kubectl htop nodes
[D] kubectl top nodes
2. Which command can be used to identify the pod that consumes the most memory?
[A] kubectl get pods
[B] kubectl list pods
[C] kubectl htop pods
[D] kubectl top pods
3. What is the command to get logs of the pod webapp and store these logs under /root/webapp.log?
4. Metrics Server discovers all nodes on the cluster and queries each node’s kubelet for CPU and memory usage.
[A] True
[B] False
Conclusion
Microservices architecture introduces a layer of complexity in Kubernetes applications, necessitating continuous monitoring of cluster performance. A single application may contain a large number of services running in different PODs and interfacing with each other, which introduces multiple points of failure. Monitoring and logging of events also help in anticipating problems before they bring down the application in production.
This part of the course explored the basic tools needed for logging and monitoring Kubernetes clusters and applications. KodeKloud's hands-on course also includes in-depth practicals with advanced monitoring tools so that learners can familiarize themselves with the practicalities of logging and monitoring in production environments.