
Docker Certified Associate Exam Series (Part-3): Kubernetes

Introduction to Kubernetes

Kubernetes is the most popular container orchestration tool and integrates seamlessly with Docker. The DCA certification test places specific emphasis on the services, architecture, and commands used in Kubernetes, so it is crucial to develop a firm grasp of the framework. The Kubernetes section of the DCA exam preparation tutorials combines the Kubernetes for Beginners and Kubernetes CKAD curricula, making it a good fit both for those starting out and for experienced Kubernetes users refreshing their skills.

Kubernetes Architecture

Before we go deep into the Kubernetes Architecture, let us understand its CLI first. 

The Docker CLI only lets you deploy a single instance of an app with the run command. In contrast, the Kubernetes CLI (known as kubectl) lets you deploy many instances of your app using a single run command, and you can scale this number up or down with a single command as required. Kubernetes can also scale your application automatically depending on usage through the use of POD Autoscalers and Cluster Autoscalers.
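For instance, assuming an app already deployed under the name my-web-app (a hypothetical name used here for illustration), scaling it to five instances is a single command:

$ kubectl scale deployment my-web-app --replicas=5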

Performing rolling updates on your applications is easy using the command:

$ kubectl rolling-update my-web-server --image=web-server:2

Rolling back a failed update is also performed using the single command:

$ kubectl rolling-update my-web-server --rollback

This allows you to test updates on a few instances of your application before rolling them out into production.

Kubernetes is built on an open architecture that supports third-party network and storage vendors, and most vendors provide plugins that allow for easy integration with Kubernetes. Kubernetes uses containerd as a container runtime platform to run applications in containers. On top of that, Kubernetes also integrates with other container runtimes such as CRI-O and rkt (Rocket), which is handy for those using runtime platforms other than Docker.

Kubernetes creates clusters made of several nodes. A node is a virtual or physical machine that runs the Kubernetes framework. These nodes are worker machines that run instances of an application, and having several nodes in a cluster makes an application highly fault-tolerant. A master node has the Kubernetes control plane components installed and manages the containers in the cluster.

The components of the Kubernetes framework installed on a machine are:

  • The API Server: This is the Kubernetes front end that facilitates communication with users, other applications, and the Kubernetes CLI.
  • Kubelet Service: The agent that runs on every node within a cluster and makes sure that containers are running in a POD.
  • Etcd: The distributed key-value store that holds the data used to manage the cluster. It also helps prevent conflicting updates across the cluster.
  • The Container Runtime: The underlying software platform on which the containers run.
  • Controllers: These are Kubernetes' intelligent agents that detect faults and failures in containers, thereby managing the state of your running cluster and resources. These agents always run inside the kube-controller-manager.
  • Scheduler: Distributes workloads across the cluster by deciding which node each newly created POD should run on.
  • Kubectl (Kube Control Tool): The command-line interface (CLI) used to deploy and manage applications in a Kubernetes cluster. Some common commands in Kubernetes include:
  • Run an app named hello-miniscule: $ kubectl run hello-miniscule --image=nginx
  • Display information on a cluster: $ kubectl cluster-info
  • List the nodes in a cluster: $ kubectl get nodes
  • Run an app with 100 replicas: $ kubectl run my-web-app --image=my-web-app --replicas=100

PODs

Before you start working with Kubernetes PODs, the following conditions should be met:

  • An application has been developed and built within a Docker image and made available through the Docker repository.
  • A Kubernetes cluster is set up and working.
  • All services in the cluster should be in a running state.

Kubernetes does not deploy containers directly onto nodes. Instead, it encapsulates them using PODs. A POD is the smallest object you can deploy using Kubernetes, and each POD runs a single instance of an application. PODs help you scale your application by adding a new POD with a new application instance within the same node. You can also deploy PODs on new nodes to help improve capacity and balance loads. 

Multi-Container PODs

PODs typically have a 1:1 relationship with containers. A single POD can, however, run multiple containers so long as they are not of the same type. For instance, a POD can run your application container plus a helper container that provides support tasks. Since these containers share a network and storage space, they can easily communicate by referring to each other as localhost. 
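As a rough sketch, a multi-container POD simply lists both containers under the same spec in its definition file (the names and images below are hypothetical; definition files are covered in detail in the next section):

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: my-web-app      # the main application container
    image: my-web-app
  - name: log-helper      # a helper container supporting the main one
    image: log-helper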

To deploy a POD, use the Kubernetes run command:

$ kubectl run nginx --image=nginx

To get the running PODs in a node, use the command:

$ kubectl get pods

Once you have developed familiarity with PODs, you can proceed to the demo lesson, where you will learn the practical aspects of deploying PODs.

PODs with YAML

It is easy to create a POD using a YAML configuration file. For this class, let us create and run a POD using the file pod-definition.yml.

To successfully create a YAML configuration file, we’ll need an understanding of YAML’s root-level properties (required fields). There are four main fields in a YAML file: apiVersion, kind, metadata, and spec.

  • The apiVersion is the version of the Kubernetes API used for the deployment. For a POD, we’ll use v1. Other versions available include apps/v1 and beta versions such as apps/v1beta1.
  • The kind is the type of object being deployed. For our project, this is Pod. Other kinds supported by Kubernetes include Service, ReplicaSet, and Deployment.
  • The metadata holds properties of your deployment, such as names and labels. These are nested as children of the metadata field, indicated by indenting them to the right in the configuration file.
  • The spec field takes in additional information about your POD, mainly the containers running in it. Since the container names and specifications are children of the spec field, they are also indented to the right, indicating nesting.

Here is the sample YAML definition file for a Pod based on the nginx image.

apiVersion: v1
kind: Pod
metadata: 
  name: pod-definition
  labels:
    app: nginx
    tier: frontend
spec:
  containers:
  - name: nginx
    image: nginx

Once you have specified all required fields, you can deploy your pod using the command:

$ kubectl create -f pod-definition.yml

To see the pods running in your cluster, run the command:

$ kubectl get pods

To get information on a particular POD, run the command:

$ kubectl describe pod myapp-pod

Once you are comfortable with this section, head over to the demo class and research questions to test your grasp.

ReplicaSets and Replication Controllers

Controllers are the intelligent agents of Kubernetes: they monitor the state of objects and respond accordingly. The Replication Controller is one such agent. It runs several instances of a POD in a cluster, providing high availability, and automatically brings up a new POD when one fails. It also ensures that the specified number of PODs is running at all times, and it distributes workloads across multiple PODs, allowing for scaling and load balancing.

Creating a Replica Controller

To create a Replica Controller, you define its properties in a YAML file. The code block below shows the fields required to create a Replication Controller called myapp-rc.

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-controller
        image: nginx
  replicas: 3

The POD template is extracted from the POD YAML definition file.
You will also need to define the number of replicas, nested within the spec field as a sibling of the template.

To then run this replication controller in Kubernetes, use the command:

$ kubectl create -f rc-definition.yml

To list the replication controllers, use the command:

$ kubectl get replicationcontroller

To list the pods created by a replication controller:

$ kubectl get pods

Creating a ReplicaSet

A ReplicaSet performs largely the same functions as the Replication Controller, except that it uses a selector with labels to filter and identify the PODs it manages. The code block below shows the YAML definition file used in creating a ReplicaSet.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end

Once you have defined the YAML file, you can deploy this ReplicaSet using the command:

$ kubectl create -f replicaset-definition.yml

Labels and Selectors

ReplicaSets use Selectors to find the right PODs that need to be monitored or replaced. They can do this by looking at Labels defined in the YAML definition file. 

There are three ways to scale your application using ReplicaSets:

  1. Change the number of replicas listed in the YAML file, then run the Kubernetes replace command:

$ kubectl replace -f replicaset-definition.yml

  2. Scale the number of replicas directly, pointing the scale command at the definition file:

$ kubectl scale --replicas=n -f replicaset-definition.yml

  3. Scale the ReplicaSet by referencing it by type and name, without editing the file:

$ kubectl scale --replicas=6 replicaset myapp-replicaset

The table below highlights some common commands you’ll use for ReplicaSets and Controllers.

  • Deploy a ReplicaSet: $ kubectl create -f replicaset-definition.yml
  • Display all ReplicaSets running in a cluster: $ kubectl get replicaset
  • Remove a ReplicaSet: $ kubectl delete replicaset myapp-replicaset
  • Update a ReplicaSet: $ kubectl replace -f replicaset-definition.yml
  • Scale a ReplicaSet: $ kubectl scale --replicas=6 -f replicaset-definition.yml

Deployments

In Kubernetes, a Deployment is an object that lets you perform rolling updates and upgrades across the instances of an application within a ReplicaSet. To create a Kubernetes deployment, you specify its contents in a YAML file similar to that of a ReplicaSet, but with the kind set to Deployment.

Below is the YAML file for a deployment called deployment-definition.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-definition
  labels:
    app: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80
        - name: rss-reader
          image: nickchase/rss-php-nginx:v1
          ports:
            - containerPort: 88

Some common commands you’ll use when working with deployments include:

  • Run a deployment: $ kubectl create -f deployment-definition.yml
  • Display running deployments: $ kubectl get deployments
  • Update a deployment: $ kubectl apply -f deployment-definition.yml or $ kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1
  • Check the status of a rollout: $ kubectl rollout status deployment/myapp-deployment
  • Display rollout history: $ kubectl rollout history deployment/myapp-deployment
  • Roll back changes: $ kubectl rollout undo deployment/myapp-deployment

Update and Rollbacks

A rollout is triggered when you create a deployment, producing the first version (Revision 1) of your application. When you update your application, a new rollout creates a new version (Revision 2). If you are unsatisfied with the updates, you can roll back the changes and have Revision 1 of your application running again.

To check the status of the latest rollout of your application, use the command:

$ kubectl rollout status deployment/myapp-deployment

To check the history of your deployment rollouts:

$ kubectl rollout history deployment/myapp-deployment

Deployment Strategies

There are two strategies you can use to deploy revisions of your application. With the Recreate strategy, you destroy all running instances of your application and replace them with newer ones; this typically causes application downtime during upgrades. The Rolling Update strategy takes down old application instances and brings up new ones one at a time. Rolling Update is the default strategy used by Kubernetes deployments.

Update Strategies

There are two ways you can update your deployment. 

  1. Specify the changes in the deployment's YAML definition file, then use the Kubernetes apply command:

$ kubectl apply -f deployment-definition.yml

  2. Force changes into the running configuration directly from the command line using the set command:

$ kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1

To check whether your deployment is a recreate or a rolling update, you can use the Kubernetes describe command:

$ kubectl describe deployments

The describe output differs between the two strategies: a deployment set to Recreate shows the StrategyType Recreate and scales all old replicas down to zero before scaling the new ones up, while a deployment set to RollingUpdate scales the old and new ReplicaSets down and up one at a time.

Upgrades

When you launch a new deployment, it automatically creates the underlying ReplicaSet, which also creates a number of PODs based on the required number of Replicas. When performing upgrades, the deployment will create a new ReplicaSet, then take down containers in the old set, replacing them using a Rolling Update. 

To keep track of rolling updates, you can use the command:

$ kubectl get replicasets

Rollbacks

If you are unsatisfied with changes to your application, you can roll them back using the command:

$ kubectl rollout undo deployment/myapp-deployment

Networking

In a node setup, the PODs host containers. Each POD is assigned an IP address in the 10.244.x.x range. On each node, Kubernetes creates an internal network (for example, 10.244.0.0) to which all PODs attach, and this default network assigns the IP addresses to the PODs created within the node. If two nodes running Kubernetes have been assigned the IP addresses 192.168.1.2 and 192.168.1.3, there is no automatic network connection between PODs on these nodes.

To establish communication between PODs in different nodes, two conditions should be met:

  1. All containers and PODs can automatically communicate with each other without needing a Network Address Translator (NAT).
  2. Nodes and containers can communicate back and forth without requiring a NAT.

You will then use a custom networking solution (e.g. Cisco ACI, Cilium, Flannel, VMware NSX) to manage networking within and between nodes. These solutions create virtual networks within each node, assign addresses to the PODs attached to them, and establish communication between nodes using routing techniques.

Services

In Kubernetes, services are objects that enable communication between applications, between components within an application, and with external sources. Services provide the front end through which users reach an application, establish connections between various groups of PODs in a cluster, and are the key objects that enable loose coupling between microservices in containerized applications.

Let’s say you want to access the webserver in a POD whose IP address is 10.244.0.2. If you are within the node, you could SSH in and request the page with curl:

$ curl http://10.244.0.2

or you could use a browser to access the web page at http://10.244.0.2

If you are not within the node, however, you will require the NodePort service which makes internal PODs accessible on ports in a Node. This service receives a request from a port within a node then points this request to a specific POD running the required application.
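Like the other service types covered below, a NodePort service can be defined in a YAML file. A minimal sketch (the selector label and port values are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  selector:
    app: myapp          # forwards traffic to PODs carrying this label
  ports:
    - targetPort: 80    # port on the POD
      port: 80          # port on the service
      nodePort: 30008   # port exposed on the node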

ClusterIP

The ClusterIP service creates a virtual IP inside a cluster to enable communication between different services and servers in a node. It achieves this by organizing PODs into distinct groups that can be managed from a single interface. You can create the ClusterIP service using a YAML definition file named service-definition.yml with the following specifications:

apiVersion: v1
kind: Service
metadata:
  name: service-definition
spec:
  type: ClusterIP
  selector:
    app: MyApp
    type: Backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Then run this service using the command: 

$ kubectl create -f service-definition.yml

To check the services running in a cluster, use the command:

$ kubectl get services

LoadBalancer

The LoadBalancer service helps you distribute workloads among nodes and PODs running your application. In this lesson, you’ll be introduced to load balancing using the service on the Google Cloud Platform.

Since Google Cloud Platform offers native support for Kubernetes Services, you can easily configure load balancing by creating a YAML definition file with the type being LoadBalancer, as shown below:

apiVersion: v1
kind: Service
metadata: 
  name: myapp-service
spec: 
  type: LoadBalancer
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30008

Demo App

Having grasped the major concepts of Kubernetes, you can test your knowledge through a series of classes to create, deploy, and manage a demo voting app as shown in the image below. 

NameSpaces

In Kubernetes, Namespaces offer a way to organize your clusters into subclusters that are logically separated yet can still communicate with each other. When a cluster is set up, Kubernetes automatically creates three namespaces: default, kube-system, and kube-public. You can use namespaces to allocate resource limits and enforce network policies.

Services within the same namespace can be reached with a simple call, e.g.:

mysql.connect("db-service")

To connect to a service in a different namespace, use the fully qualified service name:

mysql.connect("db-service.dev.svc.cluster.local")

The full name breaks down as <service-name>.<namespace>.svc.cluster.local: db-service is the service, dev is the namespace, svc marks the object as a service, and cluster.local is the cluster's default domain.

To create a NameSpace, define its attributes in a YAML file and run it using the create command. The specifications for the namespace-dev.yml file are:

apiVersion: v1
kind: Namespace
metadata:
  name: dev

You can then create this namespace by typing the command:

$ kubectl create -f namespace-dev.yml

Here are some common commands you’ll use when dealing with namespaces:

  • Deploy a namespace: $ kubectl create -f namespace-dev.yml
  • View pods running in the current namespace: $ kubectl get pods
  • View pods running in a different namespace: $ kubectl get pods --namespace=dev
  • Switch to another namespace: $ kubectl config set-context $(kubectl config current-context) --namespace=dev
  • View pods running in all namespaces: $ kubectl get pods --all-namespaces

Namespaces also allow you to limit resource allocations. To do this, you define a resource quota using a YAML definition file as shown below:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 5Gi
    limits.cpu: "10"
    limits.memory: 10Gi

Then run the command:

$ kubectl create -f compute-quota.yaml

Commands and Arguments

Docker

In Docker, we use Dockerfiles to define the tasks and processes that run in specific containers. This file contains scripted instructions that guide Docker on how to build the image. Two types of instructions define what a container executes: an ENTRYPOINT and a COMMAND (CMD). CMD instructions are mostly used for setting default commands that users can easily override. An ENTRYPOINT defines a fixed executable that is hard to override unless the --entrypoint flag is invoked.
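As a minimal sketch, the ubuntu-sleeper image used in the next section could be built from a Dockerfile like this (assuming the course's sleep-based example):

# Hypothetical Dockerfile for an ubuntu-sleeper image.
# ENTRYPOINT fixes the executable; CMD supplies a default,
# easily overridden argument.
FROM ubuntu
ENTRYPOINT ["sleep"]
CMD ["5"]

Running docker run ubuntu-sleeper 10 would then replace the default argument 5 with 10 while keeping the sleep entrypoint.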

Kubernetes

In this short class, you will set up a pod definition file to run the command created in Docker earlier to familiarize yourself with Commands and Arguments in Kubernetes.

The specifications for the pod-definition.yml file are:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper-pod
spec:
  containers:
    - name: ubuntu-sleeper
      image: ubuntu-sleeper
      args: ["10"]

You can then deploy this pod using the command:

$ kubectl create -f pod-definition.yml

Arguments specified on the Docker command line map to fields in the YAML definition file: the command field overrides the ENTRYPOINT instruction, while the args field overrides the CMD instruction.
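For example, a sketch that overrides both instructions (the sleep2.0 binary is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper-pod
spec:
  containers:
    - name: ubuntu-sleeper
      image: ubuntu-sleeper
      command: ["sleep2.0"]   # overrides the image's ENTRYPOINT
      args: ["10"]            # overrides the image's CMD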

Environment Variables

Docker

Environment Variables let you define dynamic elements for your containerized application. For instance, rather than having to rewrite the code every time you want to change your application’s background color, you could define it as an environment variable. Now all you have to do is include the application color in your docker run command to update the background color:

$ docker run -e APP_COLOR=blue simple-webapp-color

You can run this command multiple times to have different instances of your application running with different background colors.

Kubernetes

In Kubernetes, you set environment variables directly in a POD definition file. The attributes for setting the environment variable APP_COLOR are shown in the code block below:

apiVersion: v1
kind: Pod
metadata: 
  name: simple-webapp-color
spec:
  containers:
    - name: simple-webapp-color
      image: mmumshad/simple-webapp-color
      ports:
        - containerPort: 8080
      env:
        - name: APP_COLOR
          value: pink

You can also populate an environment variable from Kubernetes Secrets and ConfigMaps instead of hard-coding its value.
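A minimal sketch of both variants, assuming a ConfigMap named app-config and a Secret named app-secret that each hold an APP_COLOR key:

env:
  - name: APP_COLOR
    valueFrom:
      configMapKeyRef:      # read the value from a ConfigMap
        name: app-config
        key: APP_COLOR
  - name: APP_SECRET_COLOR
    valueFrom:
      secretKeyRef:         # read the value from a Secret
        name: app-secret
        key: APP_COLOR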

Probes

We use probes in Kubernetes to investigate the running and readiness states of containers.

Readiness Probes

The kubelet uses Readiness Probes to know when a container is ready to start accepting traffic. There are two important concepts to understand when dealing with a pod’s lifecycle:

  1. POD status – indicates where a pod is in its lifecycle, and it could be Succeeded, Failed, Unknown, Pending, ContainerCreating or Running.
  2. POD Conditions – arrays of TRUE/FALSE values that give extra details about a POD’s status. The conditions include: PodScheduled, Initialized, ContainersReady, and Ready.

To check a pod’s condition, use the command:

$ kubectl describe pod POD-NAME

Readiness probes investigate the ContainersReady and Ready conditions to check whether specific containers can accept traffic. With a readiness probe, you can investigate the actual state of an application as it runs inside a container. You can also define the attributes of your readiness probe inside a POD’s YAML configuration file. 

The code block below shows the YAML definition file of a simple-webapp pod that includes the HTTP readiness probe. 

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080

Other ways you can test a container's readiness are the TCP Test and the Exec Command test.
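A minimal sketch of each, assuming a database listening on port 3306 and an application that writes /app/is_ready once it is up:

readinessProbe:
  tcpSocket:        # succeeds once the port accepts connections
    port: 3306

readinessProbe:
  exec:             # succeeds once the command exits with status 0
    command:
      - cat
      - /app/is_ready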

Liveness Probes

Liveness probes check whether the containers running instances of your application are still healthy. Liveness probes are configured just like readiness probes, replacing the word readiness with liveness. The YAML definition file for a liveness probe is shown below:

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-liveness
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080

Volumes in Kubernetes

Containers in Docker are transient in nature; they only exist while running a process and terminate when the process is over, and the data inside them is transient too. We use volumes to persist data so it can be accessed permanently. In Kubernetes, volumes likewise give otherwise transient pods access to persistent storage.
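A minimal sketch of a pod that assigns a volume backed by a directory on the node (names and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - mountPath: /opt       # where the volume appears inside the container
      name: data-volume
  volumes:
  - name: data-volume
    hostPath:
      path: /data           # directory on the node that backs the volume
      type: Directory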


Once we have created a volume, we mount it into the container, making its data accessible to the application.

Kubernetes volumes are supported by most third-party storage services. For instance, you can back your volume with Amazon's AWS Elastic Block Store (EBS) instead of a local directory on the node.
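A sketch of the corresponding volumes section (the volume ID is a placeholder):

volumes:
- name: data-volume
  awsElasticBlockStore:
    volumeID: <volume-id>   # ID of an existing EBS volume
    fsType: ext4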

Persistent Volumes

Persistent volumes help you manage storage centrally: they let your administrators configure a cluster-wide pool of storage when deploying your application, and users can then request parts of this pool using Persistent Volume Claims. This is suitable for a large environment where users deploy multiple pods continuously.

Below is a sample YAML definition file for a persistent volume.
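A minimal sketch, assuming a 1Gi volume backed by a hostPath directory (all values are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol1
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/data   # node-local backing directory; for demos only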

Once you have specified the attributes, you can create this volume by running the command:

$ kubectl create -f pv-definition.yml

Persistent Volume Claims

A Persistent Volume Claim (PVC) is created by a user when they want to access shared volumes. Kubernetes binds each PVC to a Persistent Volume, based on the user request and volume’s properties. Each PVC can only be bound to one PV, so they have a 1:1 relationship. If more than one PV satisfies the requirements of a PVC, users can specify their required volumes using labels and selectors. 

Here is a YAML definition file for a persistent volume claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata: 
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

When a user deletes a persistent volume claim, you can reclaim the persistent volume in one of three ways: Delete it completely; Retain it, so it still exists but won't accept new claims; or Recycle it so that it can serve other users.
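This behaviour is governed by the persistentVolumeReclaimPolicy field in the PV spec; a minimal sketch:

spec:
  persistentVolumeReclaimPolicy: Retain   # one of Retain, Delete, or Recycle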

Storage Classes

With storage classes, you can provision storage for your volumes dynamically. A storage class creates storage (for instance, on Google Cloud) and then automatically makes it available to pods as needed. You can create different classes to serve different storage and replication needs. A YAML definition file for a storage class (google-storage) to handle the volumes created earlier is outlined below.
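A minimal sketch of that file, assuming the GCE persistent-disk provisioner:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-storage
provisioner: kubernetes.io/gce-pd   # dynamically provisions GCE persistent disks

A PVC can then request this class by setting storageClassName: google-storage in its spec.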

Different providers make different storage services available. The following are some of the most popular storage provisioners you may encounter:

  • AWSElasticBlockStore
  • AzureFile
  • AzureDisk
  • CephFS
  • Cinder
  • FC
  • FlexVolume
  • Flocker
  • GCEPersistentDisk
  • Glusterfs
  • iSCSI
  • Quobyte
  • NFS
  • RBD
  • VsphereVolume
  • PortworxVolume
  • ScaleIO
  • StorageOS

ConfigMaps

When you have numerous pod definition files, you can use Kubernetes ConfigMaps to manage environment data centrally. ConfigMaps store environment data in the form of key-value pairs. To set up ConfigMaps for your pod, you need to follow two steps:

  1. Create a ConfigMap
  2. Inject the map into a pod

To create a ConfigMap imperatively, use the command:

$ kubectl create configmap

To add key-value pairs to your ConfigMap, run the command:

$ kubectl create configmap app-config --from-literal=APP_COLOR=blue

Note that the command above follows the format:
kubectl create configmap <config-name> --from-literal=<key>=<value>

You can also load the key-value pairs from a file, using the format kubectl create configmap <config-name> --from-file=<path-to-file>:

$ kubectl create configmap app-config --from-file=app_config.properties

To create a ConfigMap declaratively, define it in a YAML file and use the command:

$ kubectl create -f config-map.yaml
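A sketch of what config-map.yaml could contain, reusing the key from the imperative example above:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue   # plain key-value pairs under data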

To view the ConfigMap defined in a file:

$ kubectl get -f config-map.yaml

To view your configuration maps, type in the command:

$ kubectl get configmaps

To describe configuration maps:

$ kubectl describe configmaps

To use a configuration map within a pod, reference it in the pod definition and create the pod:

$ kubectl create -f pod-definition.yaml

You can inject this configuration data as single environment variables, or as files within a volume, as sketched below.
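A sketch of the environment-variable route, assuming the app-config ConfigMap created earlier and the simple-webapp-color container from before:

spec:
  containers:
    - name: simple-webapp-color
      image: mmumshad/simple-webapp-color
      envFrom:
        - configMapRef:       # imports every key in the ConfigMap as an env variable
            name: app-config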

Secrets

While secrets also store environment data, they are used to protect sensitive information by storing it in an encoded format. To use a secret in your pod, you'll need two steps:

  1. Create a secret
  2. Inject it into a pod

To create a secret imperatively, use the command:

$ kubectl create secret generic

Specify key-value pairs directly on the command line using the format kubectl create secret generic <secret-name> --from-literal=<key>=<value>:

$ kubectl create secret generic app-secret --from-literal=DB_Host=mys

Add other variables to your secrets as shown below:

$ kubectl create secret generic app-secret1 --from-literal=DB_Host=mys --from-literal=DB_User=root  --from-literal=DB_Password=paswrd

To specify a secret from a file, use the format kubectl create secret generic <secret-name> --from-file=<path-to-file>:

$ kubectl create secret generic \
 app-secret --from-file=app_secret.properties

When creating a secret declaratively, you will need to encode the content into a base64 string. To do this on a Linux command line, use:

$ echo -n 'elementname' | base64

where elementname is the value you want to encode.

Write a Secret config file that looks like this:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: ZWxlbWVudG5hbWU=

To create a secret declaratively, use the command:

$ kubectl create -f secret-data.yaml

To view your secrets, use the command:

$ kubectl get secrets

To view your encoded values, run the command:

$ kubectl get secret app-secret -o yaml

To decode the encoded values, use the command:

$ echo -n 'encodedvalue' | base64 --decode

You can also inject secrets as single environment variables or as files in a volume.  
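A sketch of the environment-variable route, assuming the app-secret created earlier:

envFrom:
  - secretRef:        # imports every key in the Secret as an env variable
      name: app-secret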

Network Policies

In this section of the course curriculum, you can test your knowledge of various commands and configuration settings to use when creating and managing network policies.

This section also tests your ability to:

  • Identify the impact of rules configured on a policy
  • Access the user interface of applications from the terminal
  • Perform connectivity tests using these UIs
  • Create a specific network policy using Kubernetes documentation

Some common network policy actions and their commands include:

To see the number and names of active network policies:

$ kubectl get netpol

To get the pod onto which a policy is applied:

$ kubectl get pods -l name=podselector-name

To understand the type of traffic that a policy should handle:

$ kubectl describe netpol policy-name
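For reference, a minimal sketch of a network policy that admits ingress traffic to a database POD only from an API POD (all names, labels, and ports here are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db            # the PODs this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              name: api-pod   # only these PODs may connect
      ports:
        - protocol: TCP
          port: 3306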

Research Questions & Conclusion

This concludes the Kubernetes curriculum for the DCA certification exam. To test your knowledge, it is strongly recommended that you work through the research questions covering all core concepts in the coursework, along with a test to prepare you for the exam. You can also send feedback to the course developers if you would like something changed within the course.

Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back. 

Quick Tip – Questions below may include a mix of DOMC and MCQ types.

1. Which of the following are correct commands to create config maps? Select all the answers that apply.

[A] kubectl create configmap CONFIGMAP-NAME --from-literal=KEY1=VALUE1 --from-literal=KEY2=VALUE2

[B] kubectl create configmap CONFIGMAP-NAME --from-file=/tmp/env

[C] kubectl create configmap CONFIGMAP-NAME --file=/tmp/env

[D]  kubectl create configmap CONFIGMAP-NAME --literal=KEY1=VALUE1 KEY2=VALUE2

2. Where do you configure the configMapKeyRef in a pod to use environment variables defined in a ConfigMap?

[A] spec.containers.env

[B] spec.env.valueFrom

[C] spec.containers.valueFrom

[D] spec.containers.env.valueFrom

3. Which statements best describe Kubernetes secrets?

[A] Kubernetes secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys.

[B] Storing confidential information in a Secret is safer.

[C] Secrets may be created by Users or the System itself.

[D] It is a best practice to check-in secrets into source code repositories.

4. Inspect the below pod-definition file and answer the following questions.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers: 
   - name: nginx-container
     image: nginx
   - name: agent
     image: agent

How many IP addresses are consumed by the pod when it’s created?

[A] 1

[B] 2

[C] 3

[D] 4

5. Where do you specify image names in a pod definition YAML file to be deployed on Kubernetes?

[A] containers.image

[B] spec.containers.image

[C] template.containers.image

[D] kind.containers.image

6. How do you inject configmap into a pod in Kubernetes?

[A] Using envFrom and configMapRef

[B] Using env and configMapRef

[C] Using envFrom and configMap

[D] Using env and configMap

7. Refer to the below specification and identify which of the statements are true?

ports:
        - containerPort: 80
      - name: logger
        image: log-agent:1.2
      - name: monitor
        image: monitor-agent:1.0

[A] This is an invalid configuration because the selector matchLabel nginx does not match the label web set on the deployment

[B] This is an invalid configuration because there are more than 1 containers configured in the template

[C] This is an invalid configuration because the selector field must come under the template section and not directly under spec

[D] This is an invalid configuration because the API version is not set correctly

[E] This is a valid configuration

8. Which of the following are valid service types in Kubernetes?

[A] NodePort

[B] ClusterIP

[C] LoadBalancer

[D] ExternalName

[E] ElasticLoadBalancer

9. What are the 4 top level fields a Kubernetes definition file for POD contains?

[A] apiVersion

[B] templates

[C] metadata

[D] labels

[E] kind

[F] spec

[G] namespaces

[H] containers

10. Which of the below statements are correct?

apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    obj: web-service
    app: web
spec:
  selector:
    app: web
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 39376

[A] Traffic to port 39376 on the node hosting the pod in the cluster is routed to port 9376 on a POD with the label app web on the same node 

[B] Traffic to port 39376 on all nodes in the cluster is routed to port 9376 on a random POD with the label app web

[C] Traffic to port 80 on the service is routed to port 9376 on a random POD with the label app web

[D] Traffic to port 80 on the node is routed to port 9376 on the service

11. What is the default range of ports that Kubernetes uses for NodePort if one is not specified?

[A] 32767-64000

[B] 30000-32767

[C] 32000-32767

[D] 80-8080

12. Which among the following statements are true without any change made to the default behaviour of network policies in the namespace?

[A] As soon as a network policy is associated with a POD traffic between all PODs in the namespace is denied

[B] As soon as a network policy is associated with a POD all ingress and egress traffic to that POD are denied except allowed by the network policy

[C] As soon as a network policy is associated with a POD all ingress and egress traffic to that POD are allowed except for the ones blocked by the network policy

[D] A group of teams that share a specific set of permissions

13. What is the command to delete the persistent volumes?

[A] kubectl delete pv PV-NAME

[B] kubectl del pv PV-NAME

[C] kubectl rm pv PV-NAME

[D] kubectl erase pv PV-NAME

14. What command would you use to create a Deployment? Select the correct answer.

[A] kubectl get deployments

[B] kubectl get nodes

[C] kubectl create

[D] kubectl run

15. Regarding the following YAML, What should we do to correct the syntax errors?

apiVersion: v1/apps
  kind: Pods
  metadata:
    name: apache 
     labels:
      app: myapp
  spec:
   containers: 
   - name: apache
     image: httpd

[A] We need to use apiVersion as v1 but not v1/apps

[B] kind should be Pod but not Pods

[C] containers should be container

[D] labels keyword should be inline with name under metadata

16. Which statement best describes the Worker Node component of Kubernetes?

[A] kubelet and the container runtime are worker node components

[B] kube-proxy is one of the worker node components

[C] kube-scheduler is one of the worker node components

By properly following this study guide up to this part of the series, you have prepared yourself to handle Kubernetes-related tasks in Docker, and you are of course a step closer to passing the DCA certification test. On KodeKloud, you also get a learning path with recommendations, sample questions, and tips for clearing the DCA exam.

