In the past few years, containers have become a hot topic among not just developers, but also enterprises. This growing interest has caused an increased need for security improvements and hardening, as well as preparation for scalability and interoperability. This has necessitated a lot of engineering, and here’s the story of how much of that engineering has happened at an enterprise level at Red Hat.
When I first met up with representatives from Docker Inc. (Docker.io) in the fall of 2013, we were looking at how to make Red Hat Enterprise Linux (RHEL) use Docker containers. (Part of the Docker project has since been rebranded as Moby.) We had several problems getting this technology into RHEL. The first big hurdle was getting a supported Copy On Write (COW) file system to handle container image layering. Red Hat ended up contributing a few COW implementations, including Device Mapper, btrfs, and the first version of OverlayFS. For RHEL, we defaulted to Device Mapper, although we are getting a lot closer on OverlayFS support.
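To make the layering requirement concrete, here is a minimal sketch of the kind of stacking a COW file system provides, using an OverlayFS mount as the example; the directory paths are placeholders for illustration, not the paths Docker actually uses:

```
# Stack a read-only lower layer (an image layer) under a writable upper
# layer (the running container): reads fall through to /lower, while all
# writes land in /upper, leaving the image untouched.
mount -t overlay overlay \
    -o lowerdir=/lower,upperdir=/upper,workdir=/work \
    /merged
```

Device Mapper achieves a similar effect with block-level snapshots rather than stacked directories, which is part of why the choice of default backend mattered for RHEL.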
The next major hurdle was the tooling to launch the container. At that time, upstream Docker was using the LXC tools for launching containers, and we did not want to support the LXC tool set in RHEL. Prior to working with upstream Docker, I had been working with the libvirt team on a tool called virt-sandbox, which used libvirt-lxc for launching containers.
At the time, some people at Red Hat thought it would be a good idea to swap out the LXC tools and add a bridge so the Docker daemon would communicate with libvirt, using libvirt-lxc to launch containers. There were serious concerns with this approach. Consider the following example of starting a container with the Docker client (docker-cli) and the layers of calls before the container process (pid1OfContainer) is started:
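```
docker-cli → docker-daemon → libvirt-lxc → pid1OfContainer
```

This is a sketch of the layering just described: each arrow is another daemon or library the request has to pass through before the container’s first process actually runs.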