In 2014, Brandon Butler, the Senior Research Analyst at IDC, published an article asking whether cloud containers were just a new buzzword or a game-changing technology. The timing of the article was significant: it appeared shortly after Docker v1.0 was first released, and just as Microsoft announced its Kubernetes Visualizer tool for Azure.
The following year, Datadog, the monitoring service for cloud-scale applications, reported a 500 percent increase in businesses using the Docker platform to deploy and orchestrate cloud containers. A further 30 percent growth was recorded in 2016, and another 40 percent in 2017, led notably by businesses running more than 500 hosts.
Yet, according to Gartner, the container ecosystem is still in its infancy. The IT research company predicts that “By 2020, more than 50 percent of global organizations will be running containerized applications in production, up from less than 20 percent today”, implying that cloud containers are more than a new buzzword: they are a game-changing technology that is here to stay.
If you are new to the world of cloud containers, think of them as small packages containing an application, along with just the settings and storage needed to run it. They can be spawned in seconds, and they run on top of a single shared instance of the host operating system, using a fraction of the footprint that a comparable VM running the same application would leave.
Consequently, they are less expensive to deploy than VMs; and although cloud containers are designed to run a single, isolated application or service, they can be clustered together and easily scaled. They are more portable than VMs, which makes them ideal for hybrid and multicloud environments, and because cloud containers have the appearance of an individual system, each can have its own sysadmin and user group.
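To give a sense of how lightweight this packaging is, a container image can be defined in a handful of lines and launched in seconds. The sketch below assumes Docker is installed; the image name, content path, and port are purely illustrative.

```shell
# A minimal, illustrative Dockerfile: one application plus only the
# settings it needs (names and paths here are hypothetical).
cat > Dockerfile <<'EOF'
# Small base image; the container shares the host kernel rather than
# booting a guest operating system of its own.
FROM nginx:alpine
# Copy in just the application's static content.
COPY site/ /usr/share/nginx/html/
EXPOSE 80
EOF

# Build the image and start a container; it spawns in seconds,
# with no guest OS to boot.
docker build -t demo-site .
docker run -d --name demo-site -p 8080:80 demo-site
```

Because there is no guest operating system to install or boot, the same definition can be rebuilt and run unchanged on a laptop, an on-premises server, or any cloud that runs Docker, which is what makes containers so portable.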
In addition to being less expensive, faster to deploy, and more scalable than VMs, cloud containers offer further advantages that have contributed to their popularity.
Naturally, different businesses will have different motives for deploying cloud containers, and each will benefit in different ways. However, there is no disputing that just the advantages mentioned above will save time and money for any business with a presence in the cloud. Unfortunately, the saying that every cloud has a silver lining is not necessarily true when it comes to cloud containers.
The race to move IT infrastructure to the public cloud at the start of the decade demonstrated how cloud costs can spiral out of control. Developers, used to the “sunk costs” of on-premises data centers, were accustomed to deploying resources with wiggle room for future expansion and to never switching them off. As a result, VMs were frequently overprovisioned and left running when not required.
Although individual cloud containers can be provisioned more accurately than individual VMs, many developers are unsure what resources to allocate when spinning up clusters of cloud containers. Furthermore, far more cloud containers and clusters are being deployed than VMs ever were. The consequence is that the risk of overprovisioning still exists, potentially on a larger scale than before.
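One way to keep container provisioning tight is to set explicit resource limits at launch time instead of accepting open-ended defaults. The sketch below uses standard Docker flags; the container name, image name, and limit values are illustrative assumptions, not recommendations.

```shell
# Cap a container at half a CPU core and 256 MB of RAM so an
# over-eager workload cannot quietly consume the whole host
# (the values here are illustrative, not recommendations).
docker run -d --name worker \
  --cpus="0.5" \
  --memory="256m" \
  my-worker-image

# Compare what was allocated with what is actually being used,
# to spot overprovisioned containers.
docker stats --no-stream worker
```

Comparing allocated limits against observed usage in this way is the container-level equivalent of right-sizing VMs, and it is the first step toward controlling the overprovisioning risk described above.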
The issue of container costs potentially spiraling out of control is compounded by the fact that many cloud monitoring solutions are host-centric, rather than service- or role-centric. So, whereas a business may be using a cost optimization solution to control its VM deployments, the solution will be ineffective at monitoring the order-of-magnitude increase in container deployments.
Potentially a more serious issue is container security. Cloud containers do not have the same security boundaries as VMs, and if a hacker identifies a vulnerability in a kernel shared by multiple containers, they can exploit it to gain access to protected resources (“privilege escalation”), take down the operating system by triggering a kernel panic, or launch a denial-of-service attack.
Also due to the lack of robust security boundaries, any hacker who finds a weak spot in the underlying operating system can gain access to the containers running on it in order to deploy malware. With many teams using shared infrastructure, containers frequently crossing infrastructure boundaries thanks to their portability, and businesses tending toward hybrid environments, a successful malware attack can spread like wildfire.
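Some of this exposure can be reduced at deployment time by shrinking what a compromised container is allowed to do. The flags below are standard Docker options; the container and image names are hypothetical, and this is a sketch of the idea rather than a complete hardening guide.

```shell
# Run a container with a reduced attack surface: drop all extra
# kernel capabilities, block privilege escalation via setuid
# binaries, mount the filesystem read-only, and run as a
# non-root user inside the container.
docker run -d --name hardened-app \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --read-only \
  --user 1000:1000 \
  my-app-image
```

None of these flags fixes a kernel vulnerability, but they narrow what an attacker can do after breaking into a single container, which limits how far an intrusion can spread across shared infrastructure.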
Another way in which a container can be infected with malware is when a developer pulls a malicious (“poisoned”) or vulnerable container image from a public registry such as Docker Hub. IT security experts believe other flaws in Docker and similar platforms could be exploited to deploy malware, which could infect not only the container but also host machines and other VMs running on the system.
In early 2018, a team of IT security experts carried out a honeypot experiment to assess the likelihood of a cyber-attack. Within two days of deploying a container with an exposed HTTP port, hackers had found the unsecured container and were caught trying to download cryptocurrency-mining software. Three days later, news emerged that containers deployed on AWS by Tesla had been “cryptojacked” and used to mine cryptocurrency. The breach also exposed proprietary data belonging to the electric carmaker.
Are you ready for containers?