For some businesses, container optimization doesn’t have the same priority as VM optimization. However, a significant amount of cloud waste can be attributed to underutilized and unused containers, while the lack of visibility that complicates container optimization can also mask security issues.
Containers are one of the biggest growth areas in cloud computing, with some industry observers forecasting triple-digit growth this year. The reason for this growth, research suggests, is that businesses are moving away from VMs in favor of containers because of their efficiency and portability, which enable quicker innovation and faster product delivery. Containers are also less resource-intensive than VMs, and therefore relatively inexpensive to run.
Nonetheless, just because containers are cheaper than VMs doesn’t mean container optimization can be put to one side. Although cloud containers carry lower cost overheads than VMs, they’re just as vulnerable to underutilization and uncontrolled proliferation; and, if the current rate of growth continues, cloud waste attributable to the lack of container optimization could outstrip cloud waste attributable to other services.
Underutilized and unused containers can easily drive up cloud costs
The underutilization of containers is most often due to developers having to specify the CPU capacity and memory limits for each container—much like when provisioning VMs. Determining these values can be difficult when visibility into actual utilization is limited—an issue caused by many cloud monitoring solutions being host-centric rather than service-centric or role-centric. The lack of visibility into container deployment can also result in unused containers driving up cloud costs while sitting idle.
As with the provisioning of VMs, developers tend to err on the side of caution and provision containers for the anticipated peak demand—a situation that can be multiplied many times over when the values originate from a frequently used template. The issue can be compounded by the fact that, even when individual containers are optimized, the optimal configuration for an individual container may not be what’s best for all the containers in a cluster—resulting in low cluster utilization.
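As a concrete illustration, consider a Kubernetes deployment (a hypothetical sketch; the names and values below are examples, not taken from any specific environment). The per-container CPU and memory values described above are typically declared as resource requests and limits:

```yaml
# Hypothetical Kubernetes pod spec showing per-container CPU and
# memory settings. Sizing these for anticipated peak demand is
# exactly the over-provisioning pattern described above.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: web
    image: example/web:1.0
    resources:
      requests:          # capacity the scheduler reserves for the container
        cpu: "250m"      # 0.25 of a CPU core
        memory: "256Mi"
      limits:            # hard cap the container may not exceed
        cpu: "500m"
        memory: "512Mi"
```

If these values are copied from a shared template, any over-estimate in the template is replicated across every deployment that reuses it, which is how a single cautious guess becomes cluster-wide waste.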
The lack of visibility can also mask security issues
The lack of visibility into container deployments not only complicates container optimization but can also result in security issues remaining unidentified. Cloud containers don’t have the same built-in security boundaries as VMs; and, if a hacker finds a vulnerability in the underlying operating system, they can access the containers running on it to deploy malware or cryptocurrency mining software.
Other potential vulnerabilities can be exploited at the kernel level, or originate if a developer clones a poisoned container image from the Docker repository. Although recently introduced cloud services such as AWS Fargate contain mechanisms to prevent some security issues, these mechanisms aren’t always used to their full potential. Clearly, the failure to maximize the use of security safeguards in the cloud isn’t exclusive to compute and storage services!
Overcome container optimization and security issues with CloudHealth
The CloudHealth cloud management platform includes a “Container Module” that provides role-centric visibility over all of a business’s container deployments. The module can be configured to report on resource utilization, allocation, and spend in order to help businesses understand more about how their containers are being utilized and how they can better control container costs—thus adding predictability to cloud bills. To find out more about CloudHealth’s capabilities for container optimization, don’t hesitate to contact us and request a free demo of CloudHealth in action.