Docker Certified Associate Exam Series (Part 7): Docker Engine Security
Docker Daemon Security
Security within the IT landscape is as critical to an organization as a strong armed force is to a country. To keep their IT landscape safe from external threats, organizations place heavy emphasis on preventing security attacks that could bring down business operations within days.
Without ensuring your Docker Daemon is secure, the underlying operations and business functions are always vulnerable. Someone with access to your Docker Daemon can compromise security by:
- Deleting existing containers that run your applications.
- Deleting volumes that contain crucial data.
- Misusing containers to host their own applications (e.g., bitcoin mining).
- Running privileged containers that grant them root access to your resources.
- Targeting your IT Network or other connected systems within the network.
The first step to ensuring security within your Docker platform involves using Docker best practices to keep your host secure. These include:
- Disabling password-based authentication
- Enabling SSH-key based authentication
- Determining access rights and privileges for users
- Disabling all unused ports
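To illustrate the first few items above, here is a minimal host-hardening sketch. It assumes an Ubuntu host running OpenSSH and the ufw firewall; the file paths and port numbers are illustrative and should be adapted to your environment.
# /etc/ssh/sshd_config (excerpt)
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
$ sudo systemctl restart ssh
$ sudo ufw default deny incoming
$ sudo ufw allow 22/tcp
$ sudo ufw enable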
When you enable access to your Daemon from an external host, make sure you expose its ports only to private interfaces within your organization.
To secure communication between hosts, it is best to use TLS certificates. To do so, set up a certificate authority (CA) with its certificate (cacert.pem), then create server certificates signed by that CA (server.pem and serverkey.pem). Then configure the daemon to read these certificates and set the "tls" option to "true". The daemon configuration file (/etc/docker/daemon.json) will look as shown:
{
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem"
}
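The certificates referenced in this configuration can be generated with openssl. The commands below are a rough sketch using the file names from the text; cakey.pem, the validity period, and the CN/SAN values (the daemon's IP in this example) are illustrative assumptions.
$ openssl genrsa -out cakey.pem 4096
$ openssl req -new -x509 -days 365 -sha256 -key cakey.pem -subj "/CN=docker-ca" -out cacert.pem
$ openssl genrsa -out serverkey.pem 4096
$ openssl req -new -key serverkey.pem -subj "/CN=192.168.1.10" -out server.csr
$ echo "subjectAltName = IP:192.168.1.10" > extfile.cnf
$ openssl x509 -req -days 365 -sha256 -in server.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -out server.pem -extfile extfile.cnf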
Note that 2376 is the standard Docker daemon port for encrypted communication. We can now allow users to access the Docker API by pointing the DOCKER_HOST environment variable to the host:
$ export DOCKER_HOST="tcp://192.168.1.10:2376"
Now set the DOCKER_TLS environment variable to “true” to initiate a secure connection.
$ export DOCKER_TLS="true"
This sets up encrypted communication between the client and the Docker server. This communication, however, lacks authentication: anyone who knows the exposed daemon port can still access the daemon by setting the DOCKER_TLS environment variable to "true" and pointing the DOCKER_HOST environment variable to the host.
To enable certificate-based authentication, copy the CA certificate (cacert.pem) to the daemon's host, then enable the tlsverify option and point the tlscacert option at the CA certificate in the daemon's JSON file as follows:
{
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "tlsverify": true,
  "tlscacert": "/var/docker/cacert.pem"
}
Here the tlsverify option enables authentication, while the tlscacert option points to the CA certificate used to verify client certificates.
We will then generate client certificates signed by the certificate authority (client.pem and clientkey.pem) and share them securely with the client machine.
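A sketch of generating those client certificates with openssl, assuming the CA files from the earlier sketch (cacert.pem and cakey.pem); the extendedKeyUsage extension marks the certificate for client authentication.
$ openssl genrsa -out clientkey.pem 4096
$ openssl req -new -key clientkey.pem -subj "/CN=docker-client" -out client.csr
$ echo "extendedKeyUsage = clientAuth" > extfile-client.cnf
$ openssl x509 -req -days 365 -sha256 -in client.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -out client.pem -extfile extfile-client.cnf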
On the client side, activate TLS verification:
$ export DOCKER_TLS_VERIFY="true"
Then pass the client certificates on the command line, or drop them into the .docker directory within the user's home directory.
$ docker --tlscacert=<> --tlscert=<> --tlskey=<> ps
When the certificates are placed in the .docker directory, the Docker client picks them up automatically. Now, only clients with certificates signed by the CA server can access your Docker Daemon.
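For example, instead of passing the flags on every invocation, the certificates can be copied into ~/.docker; the Docker CLI expects the fixed file names ca.pem, cert.pem, and key.pem there. The copy commands below assume the file names used earlier in this section.
$ mkdir -p ~/.docker
$ cp cacert.pem ~/.docker/ca.pem
$ cp client.pem ~/.docker/cert.pem
$ cp clientkey.pem ~/.docker/key.pem
$ docker ps    # with DOCKER_HOST and DOCKER_TLS_VERIFY still set, no extra flags are needed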
Namespaces and Capabilities
In Docker, we use namespaces to isolate workspaces. Process IDs, networks, mounts, the Unix Timesharing System (UTS), and InterProcess Communication all reside in different namespaces. Every process has a Process Identifier (PID) attached to it. When Linux boots, the first process to start (PID 1) is known as the root process. The same process can appear under different PIDs depending on the namespace it is viewed from.
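You can see PID namespaces in action by comparing the same process from inside and outside a container; the container name below is just an example.
$ docker run -d --name ns-demo ubuntu sleep 1000
$ docker exec ns-demo ps -ef          # inside the container, sleep appears with a low PID
$ ps -ef | grep "sleep 1000"          # on the host, the same process shows a different, higher PID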
Docker also assigns different privileges to different users. There are two main types of users in Docker: root and non-root. Root users have administrative privileges and can create, manage, and delete containers, while non-root users lack the super-user privileges of the root user.
To set a user ID through the Docker Command Line Interface:
$ docker run -d --name test --user=1000 ubuntu sleep 1000
$ docker exec -it test /bin/bash
$ ps -aux
You can also specify the user ID in the image’s Dockerfile:
FROM ubuntu
USER 1000
Then build this image by running the command:
$ docker build -t my-ubuntu-image .
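As a quick check, assuming the image built above, any command run in it starts as the user declared in the Dockerfile:
$ docker run --rm my-ubuntu-image id    # should report uid=1000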
Full root privileges only apply to users on the Docker host. The root user inside a container typically runs with a limited set of capabilities.
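A simple way to see this, assuming a stock ubuntu image: setting the system clock requires the SYS_TIME capability, which Docker does not grant by default, so the command fails even though you are root inside the container.
$ docker run --rm ubuntu date -s "2030-01-10 10:00:00"    # typically fails with "cannot set date: Operation not permitted"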
Linux Capabilities
Capabilities outline the roles and privileges of various users in a system. The root user is the system's most powerful user; root users and processes have unrestricted system access, including:
- creating, controlling, killing, and managing containers
- setting user and group IDs
- network operations
- system operations, and many more.
The full list of capabilities can be found on the host at /usr/include/linux/capability.h.
To check the capabilities of a normal running container, run the following commands in order:
$ docker run -d --name test --user=1000 ubuntu sleep 1000
$ docker exec -it test /bin/bash
$ apt update && apt install -y libcap-ng-utils
$ pscap
For interacting with the network stack, instead of using --privileged you should use --cap-add=NET_ADMIN, which allows the container to modify network interfaces:
$ docker run -d --name test1 --cap-add NET_ADMIN ubuntu sleep 1000
$ docker exec -it test1 /bin/bash
$ apt update && apt install -y libcap-ng-utils
$ pscap
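Capabilities can also be removed or granted wholesale; the container names below are illustrative. Dropping everything and adding back only what a workload needs is generally safer than --privileged, which grants all capabilities.
$ docker run -d --name no-kill --cap-drop KILL ubuntu sleep 1000
$ docker run -d --name minimal --cap-drop ALL --cap-add NET_BIND_SERVICE nginx
$ docker run -d --name full-priv --privileged ubuntu sleep 1000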
CGroups
Control Groups (CGroups) allow the allocation and distribution of resources among different processes and containers.
Resource Limits
All Docker containers run on the host's kernel and share kernel resources with other processes; from the host's point of view, containers are just processes. By default, a container is allowed access to unlimited resources within the host. If necessary, a container could utilize all of a host's resources, depriving other processes of them. In that case, the kernel will start killing processes to free up resources, and in extreme circumstances it could kill native host processes to preserve the system.
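As a preview of the flags covered below, resource limits are simply options on docker run and can be read back with docker inspect; the container name and the values here are illustrative.
$ docker run -d --name limited --cpus=1 --memory=512m nginx
$ docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' limited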
CPU
When two running processes need the same CPU, each process gets an equal share of CPU time. These processes don’t run concurrently. Instead, they take turns using the CPU, although these changes happen in microseconds, making it appear as though the processes run simultaneously. You can also allocate more CPU time to high-priority applications through CPU timeshares.
In this case, if process A gets an allocation of 1024 shares while process B gets 512, process A will receive twice as much CPU time as process B.
Docker relies on the Linux kernel's schedulers to enforce these CPU shares; the two most common are the Completely Fair Scheduler (the default) and the Real-Time Scheduler. To allocate 512 CPU shares to a container, run the command:
$ docker container run --cpu-shares=512 nginx
You can also restrict which CPUs a particular container runs on by specifying a CPU set. The format for defining a CPU set is:
$ docker container run --cpuset-cpus=0-1 webapp
This command ensures that the process uses the first two CPUs in an array within the host. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
Additionally, you can also limit the number of CPUs a process can use by specifying the CPU Count using the command:
$ docker container run --cpus=2.5 nginx
With the above command, if the host machine has three CPUs and you set --cpus=2.5, the container can use at most two and a half of the CPUs.
To add or reduce the CPU count, you can update it with the command:
$ docker container update --cpus=2.5 container-name
Without proper mechanisms to limit CPU usage, a process or container could end up consuming too much CPU, taking up most or all of a host's resources and making that server unresponsive.
Memory
Every system has physical memory, known as Random Access Memory (RAM). A process can consume as much RAM as is available within the host unless we enforce limits. When all system RAM has been consumed, Linux uses the SWAP space configured on the host for memory allocation. SWAP space is space allocated on physical storage disks that can be used as memory. If a process uses up all RAM and SWAP memory, the kernel raises an Out of Memory (OOM) exception and the process is killed.
You can specify the memory limit to be consumed by an application by including it in the command as shown:
$ docker container run -d --name webapp --memory=512m nginx
You can also specify the SWAP space limit as follows:
$ docker container run --memory=512m --memory-swap=512m webapp
In this case, the container will have 0 MB of SWAP space, since the swap allowance is the difference between --memory-swap and --memory. To allocate 256 MB of SWAP space, we'll run the command:
$ docker container run --memory=512m --memory-swap=768m webapp
If --memory-swap is set to a positive integer, then both --memory and --memory-swap must be set.
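To summarize the swap behaviour, here is a quick sketch with illustrative values:
$ docker run -d --memory=512m --memory-swap=512m nginx    # no swap allowed
$ docker run -d --memory=512m --memory-swap=1g nginx      # 512 MB of swap
$ docker run -d --memory=512m --memory-swap=-1 nginx      # unlimited swap
$ docker run -d --memory=512m nginx                       # unset: swap defaults to the same amount as memory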
To display a live stream of container(s) resource usage statistics:
$ docker stats
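For a one-off snapshot or specific columns, docker stats also accepts --no-stream and a Go template via --format:
$ docker stats --no-stream
$ docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"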
Research Questions & Conclusion
This concludes the Docker Engine Security chapter of the DCA certification exam series. To test your knowledge, it is strongly recommended that you work through the research questions covering all core concepts in the coursework, along with a practice test to prepare you for the exam. You can also send feedback to the course developers if you would like something changed within the course.
Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back.
Quick Tip – Questions below may include a mix of DOMC and MCQ types.
1. What is a Linux feature that prevents a process within the container from performing filesystem related operations such as altering attributes of certain files?
[A] Control Groups (CGroups)
[B] Namespaces
[C] Kernel Capabilities
[D] Network Namespaces
2. What flags are used to configure encryption on docker daemon without any authentication?
[A] tlsverify, tlscert, tlskey
[B] key, cert, tls
[C] tls, tlscert, tlskey
[D] host, key, cert, tls
3. What will happen if --memory-swap is set to 0?
[A] the container does not have access to swap
[B] the container is allowed to use unlimited swap
[C] the setting is ignored, and the value is treated as unset
4. By default, all containers get the same share of CPU cycles. How do you modify the shares?
[A] docker container run --cpu-shares=512 webapp
[B] docker container run --cpuset-cpus=512 webapp
[C] docker container run --cpu-quota=512 webapp
[D] docker container run --cpus=512 webapp
5. Limit the container webapp to use only the first CPU or core. Select the right command.
[A] docker container run --cpuset-shares=1 webapp
[B] docker container run --cpus=0 webapp
[C] docker container run --cpuset-cpus=0 webapp
6. Assume that you have 1 CPU. Which of the following commands guarantees the container at most 50% of the CPU every second?
[A] docker run -it --cpu-shares=512 ubuntu /bin/bash
[B] docker container run --cpuset-cpus=.5 webapp
[C] docker run -it --cpus=".5" ubuntu /bin/bash
[D] docker run -it --cpus=".5" --cpuset-cpus=1 ubuntu /bin/bash
7. What is a Linux feature that allows isolation of containers from the Docker host?
[A] Control Groups (CGroups)
[B] Namespaces
[C] Kernel Capabilities
[D] LXC
8. By default, a container has no resource constraints.
[A] true
[B] false
By following this study guide up to this part of the series, you have prepared yourself to handle all Docker Engine Security questions and practical scenarios, and you are of course a step closer to passing the DCA certification test.
On KodeKloud, you also get a learning path with recommendations, sample questions, and tips for clearing the DCA exam. Once you are done with this section, you can head over to the research questions and practice test sections to examine your understanding of Docker Engine Security.