Docker: the future, or just another buzzword?

What is Docker?
Docker is developed by Docker, Inc., the company behind the Docker project. As described on its website, Docker is an “open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, datacenter VMs, or the cloud.” It lets you package an application with all of its dependencies into a standardized unit called a container. A container includes everything the application needs to run: code, system libraries, tools, and other dependencies.
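To make that concrete, here is a minimal sketch of a Dockerfile, the recipe Docker uses to build a container image. The application (a Node.js server listening on port 3000) and its file names are hypothetical:

```dockerfile
# A hypothetical Node.js web app packaged with all of its dependencies.
# Base image that provides the runtime.
FROM node:0.10

WORKDIR /app

# Declare and install dependencies first, so this layer can be cached.
COPY package.json /app/
RUN npm install

# Add the application code itself.
COPY . /app

# The port the app listens on, and the command run at container start.
EXPOSE 3000
CMD ["node", "server.js"]
```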
Docker allows you to break a monolithic application into many “small apps” with reduced complexity and put each of them into a separate container. That way, many small teams can work on their own app, using the best technology for the task at hand, since you are no longer bound to a single technology for the entire application. This approach also has great synergy with the whole concept of microservices.
You can also build your application in a container that runs on your laptop, on a virtual machine, in the cloud, or anywhere else you want, and be assured that it will work the same way in each environment.
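A minimal sketch of that workflow, assuming the hypothetical Dockerfile above (the image name is arbitrary):

```sh
# Build the image once, on a laptop, a VM, or a build server...
docker build -t myapp:1.0 .

# ...and run the very same image in any environment.
docker run -d -p 3000:3000 --name myapp myapp:1.0

# The application behaves identically wherever the image runs.
docker logs myapp
```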
Containers are an old idea.
Containers, and specifically Linux containers (LXC), are not new. They are derived from the Linux kernel feature called control groups (cgroups), originally developed by Google engineers. LXC combines cgroups with isolated namespaces to provide an isolated environment for applications. It’s a low-level, operating-system-level virtualization feature: it offers an environment as close as possible to a virtual machine, but without the overhead of simulating all the hardware. Linux containers have been used for years by big companies such as Oracle, HP, and IBM. Docker was originally based on LXC, but it now uses its own library, libcontainer, to access the Linux kernel’s virtualization features.
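Both kernel features are visible through Docker’s own CLI. A quick sketch (the image and the limits are arbitrary):

```sh
# cgroups at work: cap the container's memory and CPU share.
docker run -it -m 256m --cpu-shares 512 ubuntu /bin/bash

# Namespaces at work: inside the container, `ps aux` lists only the
# container's own processes (with PID 1 being the shell above), and the
# hostname, network interfaces, and mount points differ from the host's.
```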
Difference between containers and virtual machines.
So what is the difference between virtual machines and containers?
It’s all about scope. Virtual machines are heavyweight, but they offer stronger isolation and better security. Containers, on the other hand, are delightfully light and fast, but provide slightly weaker isolation. Virtual machines emulate an entire machine, whereas containers share much of the host operating system’s resources. You can run a handful of virtual machines at the same time, whereas running hundreds of containers concurrently is no problem at all.
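The claim about hundreds of containers is easy to verify on an ordinary machine; a quick sketch:

```sh
# Start 100 containers, each running a tiny long-lived process.
for i in $(seq 1 100); do
  docker run -d --name "demo-$i" busybox sleep 3600
done

# Each container starts in a fraction of a second; list them all.
docker ps

# Clean up afterwards.
docker rm -f $(docker ps -aq --filter "name=demo-")
```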
Neither of the two is inherently better; it really depends on what you are trying to achieve. If you need a full-fledged virtual machine capable of running a different operating system, such as Windows, Solaris, or another Linux release, then choose virtual machine technology. However, if you only need to isolate single processes, or groups of them, within the same operating system, then you might be interested in container technologies, and in Docker especially. It’s very important to understand the scope of these technologies: remember to use the right tool for the job.
Docker use cases
I’ve been experimenting with container technologies for a few months now, and I can safely say that there are many solid use cases for them, such as:
- containers as a very fast in-memory database for test-driven development
- containers as lightweight mock dependencies in integration tests (Redis, ØMQ, etc.); see the sketch after this list
- simplifying your development and testing environments; no more “works on my machine” syndrome
- natural fit for the microservices pattern
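As an illustration of the mock-dependency use case above, a disposable Redis instance for an integration-test run might look like this (the environment variables and test command are hypothetical):

```sh
# Spin up a throwaway Redis for the test suite.
docker run -d --name test-redis -p 6379:6379 redis

# Point the tests at it and run them (hypothetical command).
REDIS_HOST=localhost REDIS_PORT=6379 make integration-test

# Tear it down; nothing is left behind on the host.
docker rm -f test-redis
```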
There are also a lot of cool projects built upon containers:
- Dokku. Do you want to build your own mini-Heroku clone?
- Deis. Open source PaaS.
- Tutum. Build, deploy, and manage your apps across any cloud.
- CoreOS and its tools: etcd, fleet, etc.
- Shippable. Continuous development with no DevOps code.
- Serf. Decentralized solution for cluster membership, failure detection, and orchestration.
- Kubernetes. Manage a cluster of containers as a single system.
- Quay. Think “GitHub for containers.”
- tmpnb. Creates temporary Jupyter Notebook servers using Docker containers.
- a lot of other tools…
Should I use it right away?
Well, the answer is simply no. Before adopting it, think very carefully about your current architecture and whether Docker will fit into it. It’s neither a magic unicorn nor a magic wand that will solve all of your problems. It might help, though, if used correctly: it can simplify things, but on the other hand it adds another layer of abstraction. Before dropping your current setup and diving into Docker, I encourage you to check out the official Docker website and read through it.
Container technology has become a very hot topic, and it’s going to get even more attention. Companies all around the world, including leading tech giants like Amazon and Google, are adopting containers and adding container services to their platforms. Containers can greatly simplify the process of developing applications: developers can focus on the problems that really matter to them and stop worrying about low-level details. The IT world is becoming a connected web of very specific APIs, each solving one complex problem; every piece evolves into a cloud-based service that does one job, and does it very well.
Docker popularized a new way of thinking about virtualization. It’s a great tool that has its uses, and I think that containers, virtual machines, and all the other virtualization technologies are the way of the future.
Automation is the future.
See you in the cloud!