It was a chilly winter morning over 10 years ago in Somers, New York. The small Westchester County town is dominated by the sprawling buildings of IBM's corporate headquarters. For a few of us, the hard work of the past several years was about to come to a head as we walked into the campus's main building with a couple of bulky servers. They held a self-contained setup of the system we were going to demo to IBM's senior executive team. Ironically, the technology in those boxes could remove the need to carry preconfigured physical hardware by making such systems readily available on demand. That was also precisely the reason for IBM's interest in our small startup, Meiosys. IBM was rapidly executing on its on-demand strategy. HP called it utility computing and Sun Microsystems called it N1, but they were all precursors to what we call cloud today.
The demo entailed repeatedly live-migrating a containerized three-tier application based on IBM WebSphere and DB2 back and forth between two servers while it was under load from the TPC-C benchmark. The technology was advanced: it not only virtualized the application using a fundamentally efficient abstraction but also supported sophisticated features such as live migration and coordinated checkpoint-restore of distributed applications. The application, including all its state, was wrapped in a thin layer of virtualization that we called a Container.
Based on my work at Columbia University and Hewlett Packard Labs in the early 2000s, the Meiosys system systematically decoupled the application from the underlying platform by encapsulating its state into a self-contained unit. The state of each application resource, including the state of its runtime memory and the state of its open socket connections, was captured and live-migrated to the target machine. Of the millions of connections made by the benchmark, not a single one was dropped! It was an awesome feat of research and engineering. Following its acquisition by IBM in 2005, Meiosys' technology became the basis for containers on the Linux and IBM AIX operating systems.
The technology was sophisticated and robust, but it required custom changes to the kernel that spanned almost every subsystem. Implementing the changes in AIX was relatively quick. Getting them merged into the mainline Linux kernel, however, was a long and deliberate process. Even though a fully functioning patch was available, it was too big; it had to be washed, dried and chopped into bite-size pieces for community consumption. After over a decade of community effort, most of the changes made their way into the kernel, but they were now implemented as independent features in various kernel subsystems. The freshly minted features were not easy to configure and use together. Although many companies began experimenting with them internally and embedding them into their products, out-of-the-box user-space support to tie them all into easily deployable containers was inadequate.
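To give a flavor of what "independent features in various kernel subsystems" means in practice: the Linux primitives that eventually emerged, such as namespaces, can be exercised directly with the util-linux `unshare` tool, with no container runtime involved. The sketch below is illustrative, not a description of the Meiosys system; it assumes a Linux machine with unprivileged user namespaces enabled.

```shell
# Manually combine several kernel namespaces into a crude container-like
# environment. --map-root-user maps the current user to root inside a new
# user namespace, granting the privileges needed to create the others.
# --fork makes the shell run as PID 1 of the new PID namespace.
unshare --user --map-root-user --pid --mount --uts --fork \
    sh -c 'hostname demo; echo "hostname: $(hostname)"; echo "pid: $$"'
```

Each flag enables a different namespace that was merged separately; stitching them together by hand like this, plus cgroups for resource control and an image format for the filesystem, is exactly the glue that user space lacked.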
This was a clear opportunity for DotCloud, one of the early users of containers, which keenly understood the value of the technology. With the center of gravity among virtual machine vendors shifting away from core virtualization toward storage and higher layers of the management stack, the conditions were ripe for containers. DotCloud created the Docker open source project with the explicit charter of hiding the complexity of the kernel features behind a simple user interface, combined with a repository of container images that could be readily deployed. To developers used to virtual-machine-based packaging, it was like magic to see disposable OS environments come up and go instantly. Docker was an immediate success.
Much has been written about Docker containers since. Docker's model quickly became popular. What had been a simmering activity in niche kernel circles for over a decade quite suddenly became the active focus of companies like Google, HP, IBM and Microsoft. Some are touting containers as the "next generation of virtualization" and the "technology of the decade." Containers are almost becoming synonymous with Docker. Few technologies have seen this kind of adoption rate.
Technologies rarely move so quickly from conception to viral adoption. While Docker and its usage model have been the first highly visible introduction of containers to the industry, the underlying technology itself is deep and broad. Containers are just beginning to transition out of their long incubation, and they are still in an early stage of adoption. Many interesting new usage models and capabilities are still to emerge.
This is an excerpt from the book Containers Beyond the Hype (ISBN 9780997023619, December 2015).