Docker is a platform for packaging, distributing and running applications. It allows you to package your application together with its whole environment.

Docker was the first container system that made containers easily portable across different machines. It simplified the process of packaging up not only the application but also all of its libraries and other dependencies, even the whole OS filesystem, into a simple, portable package that can be used to provision the application on any other machine running Docker.

When you run an application packaged with Docker, it sees the exact filesystem contents that you’ve bundled with it. It sees the same files whether it’s running on your development machine or a production machine, even if the production server is running a completely different Linux OS. The application won’t see anything from the server it’s running on, so it doesn’t matter if the server has a completely different environment.
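As an illustrative sketch (the base image tag and file names here are assumptions, not taken from any particular project), packaging an application together with its environment can be as simple as a short Dockerfile:

```dockerfile
# Base layer: a complete Ubuntu filesystem the app will see at runtime
FROM ubuntu:22.04
# Bundle the application itself into the image (app.sh is a hypothetical script)
COPY app.sh /app.sh
# The executable to invoke when a container is started from this image
CMD ["/bin/sh", "/app.sh"]
```

Building and running it then looks like this on any machine with Docker installed:

```shell
docker build -t my-app .   # package the app and its environment into an image
docker run my-app          # run it; the app sees only the bundled filesystem
```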

This is similar to creating a VM image by installing an operating system into a VM, installing the app inside it, and then distributing the whole VM image around and running it. Docker achieves the same effect, but instead of using VMs to achieve app isolation, it uses Linux container technologies to provide (almost) the same level of isolation that VMs do. Instead of using big monolithic VM images, it uses container images, which are usually smaller.

A big difference between Docker-based container images and VM images is that container images are composed of layers, which can be shared and reused across multiple images. This means only those layers of an image that haven't already been downloaded as part of another image need to be downloaded.

Core concepts


A Docker-based container image is a representation of your application together with its environment. It contains the filesystem that will be available to the application and other metadata, such as the path to the executable that is called when the image is run.


A Docker registry is a repository that stores your Docker images and facilitates easy sharing of those images.


A Docker-based container is a regular Linux container created from a Docker-based container image. A running container is a process running on the Docker host, but it's completely isolated from both the host and all other processes running on it. The process is also resource-constrained, meaning it can only access and use the resources allocated to it.

Virtual machines vs Docker containers

In the chapter Containers, we saw how regular Linux containers compare with VMs. Let’s see how Docker containers specifically compare to virtual machines.

If each container has its own isolated filesystem, how can two containers, say one running app A and another running app B, share the same files?

Docker images are composed of layers. Different images can contain the exact same layers because every Docker image is built on top of another image, and two different images can both use the same parent image as their base. This speeds up the distribution of images across the network and also reduces their storage footprint, because each layer is only stored once.
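Assuming a machine with Docker installed, you can inspect an image's layers yourself (the ubuntu:22.04 image name is just an example):

```shell
# List the layers an image is composed of, newest first
docker history ubuntu:22.04
# Pulling an image that shares layers with one already present
# downloads only the missing layers; shared ones are reported
# as "Already exists"
docker pull ubuntu:22.04
```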

Two containers created from two images based on the same layers can therefore read the same files, but if one of them writes to those files, the other one doesn't see those changes. Therefore, even if they share files, they are still isolated from each other. This works because container image layers are read-only. When a container is run, a new writable layer is created on top of the layers in the image. When the process in the container writes to a file located in one of the underlying layers, a copy of the whole file is created in the top-most layer, and the process writes to the copy.
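You can observe this copy-on-write behaviour directly, assuming a local Docker installation (the container name and file path are illustrative):

```shell
# Write to a file that comes from a read-only image layer
docker run --name cow-demo alpine sh -c 'echo modified > /etc/hostname'
# Show which files were copied up into this container's writable layer
docker diff cow-demo
# A fresh container from the same image still sees the original file,
# showing that the image layers themselves were never modified
docker run --rm alpine cat /etc/hostname
```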

Portability limitations of Docker

In theory, a container image can be run on any Linux machine running Docker, but one small caveat exists. All containers running on a host use the host’s Linux kernel. If a containerised application requires a specific kernel version, it may not work on every machine. VMs have no such constraints because each VM runs its own kernel.
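You can verify that containers use the host's kernel, assuming Docker is installed: the kernel version reported inside a container matches the host's.

```shell
# The kernel version on the host...
uname -r
# ...is the same version a containerised app sees,
# because containers share the host's kernel
docker run --rm alpine uname -r
```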

Additionally, a containerised app built for a specific hardware architecture can only run on other machines that have the same architecture. You can't containerise an application built for x86 and expect it to run on an ARM machine. You would need a VM for that.
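To see which architecture an image was built for, you can inspect its metadata (the alpine image name is illustrative):

```shell
# Print the CPU architecture an image targets (e.g. amd64 or arm64)
docker image inspect --format '{{.Architecture}}' alpine
```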


After the success of Docker, the Open Container Initiative (OCI) was born to create open industry standards around container formats and runtime. Docker is part of that initiative, as is rkt (pronounced “rock-it”), which is another Linux container engine.

Like Docker, rkt is a platform for running containers. It puts a strong emphasis on security, composability and conforming to open standards. It uses the OCI container image format and can even run regular Docker container images.