Containerize My World
The Coming Battle Between Virtual Machines and Containers
DevOps engineer Joe Schneider operates a large infrastructure of more than 400 servers at the startup Bunchball, which delivers “gamification” services (incentive-driven engagement programs) to employees at more than 300 corporate customers, including household names like Ford, T-Mobile, and Adobe. Bunchball’s analytics engine performs billions of calculations daily. Schneider uses container technology to “package” the applications running at Bunchball and to speed up the rollout of updates to them. Containers, he says, make deploying applications simpler and faster than virtual machines (VMs), the standard way of hosting applications in most corporate enterprises today.
“When you are running a performance-critical application, you need every piece of CPU speed you can get,” he says, “and eliminating virtual machines is huge. Containers will be a big disruption for enterprise data centers.”
Virtual machines, the product of server virtualization, were the major infrastructure revolution of the past 15 years; more than 90% of large enterprises use them today because they reduce the number of servers a company needs and make each server more flexible. Virtualization also made the cloud possible by turning compute power into a flexible, fungible resource. A growing number of software industry leaders now say containers are the next major revolution in infrastructure: in many cases they are faster, smaller, and easier to manage than VMs. The rise of container technology poses a serious threat to the dominance of VMs, and to the revenue stream of VMware (VMW), the $7 billion Palo Alto software giant that created the virtualization market and dominates it today. VMware, however, has its own intriguing plans to keep abreast of, and perhaps stay one step ahead of, the container wave. In this article, we look at the drivers of containers’ popularity and at VMware’s own plans in the area.
“In more than 20 years in this business, I’ve never seen anything like the excitement and interest I’ve seen since I joined Docker,” a Docker executive recently told us. Docker is one of the companies at the heart of the container wave; its innovation is to simplify both the packaging of an application into a container and the process of rolling that container out on a server. Modifying and rolling out applications are becoming more important for a couple of reasons. First, companies are using more software applications, in more aspects of their products, than ever before. Whenever an individual application is revised, updated, or replaced, it must be rolled out, and that can happen multiple times a day. Second, modern applications are often more complex to deploy than old-fashioned “monolithic” applications. A good example is the new wave of databases, such as those built on Hadoop, that run across multiple servers to distribute and speed up computation. While this kind of database can be far more powerful for enormous datasets than a traditional monolithic Oracle database, it can also be more complex to manage.
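The packaging-and-rollout workflow Docker popularized can be sketched in a few commands. Everything below is illustrative: the application, image name, and Dockerfile contents are hypothetical, not taken from Bunchball or any company in this article.

```shell
# Step 1: a tiny (hypothetical) application and a Dockerfile describing its package.
printf 'print("hello from a container")\n' > app.py
cat > Dockerfile <<'EOF'
# Base image supplies the OS userland and the Python runtime
FROM python:3-slim
# Bundle the application code into the image
COPY app.py /app/app.py
# Command the container runs at startup
CMD ["python", "/app/app.py"]
EOF

# Step 2: build the image and roll it out (guarded, since this
# part requires a running Docker daemon).
if command -v docker >/dev/null 2>&1; then
  docker build -t example/web:2.0 .          # package app + dependencies into one image
  docker run -d --name web example/web:2.0   # deploy it on any Docker host
fi
```

An update is the same two steps with a new tag, which is how teams ship changes multiple times a day: the image, not the server, is the unit that gets replaced.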
Retailers are a good example of businesses using software ever more intensively. “Online retailers need to move quickly. If you view software as a differentiator, as a way of outfoxing the competition, then you need to embrace this technology as the standard,” says Bryan Cantrill, CTO at Joyent, a public cloud provider whose infrastructure is based on containers. Cantrill says that by running containers directly on a server, with no virtualization layer in between, applications run faster. And they consume less space: “We get around 2,000 containers per machine [server]. That’s a much higher rate of tenancy than you get with VMs on a server, which is about 100 to 200. VMs were a 10x improvement over physical machines, and containers are another order of magnitude step up.” Because Joyent can pack more customers and apps onto its infrastructure, it can reduce prices. Joyent’s prices, says Cantrill, “are competitive with AWS,” even though Joyent is a fraction of the AWS behemoth’s size.
Another startup, Mesosphere, sells enterprises a framework for running containers that it calls a “Data Center Operating System.” Last month, Mesosphere announced new funding of $73 million from investors including Microsoft and HP Enterprise. Its customers include Verizon, Apple’s Siri, Samsung, and the review website Yelp. Says co-founder and CTO Tobi Knaup: “We have a customer that has a real-time speech processing app that used to run on VMs and they got rid of that and now run on containers and it’s 30% faster.”
Virtual machines still have certain advantages that containers cannot match. Many legacy applications are optimized to run best on VMs, and support services such as backup and disaster recovery are likewise optimized for VM-based environments. Another advantage, one of VMs’ original design objectives, is the ability to simulate multiple operating system environments so that, for example, a Windows application can run on a Linux server. Despite the cultlike mantra of container enthusiasts that “all companies are becoming software companies,” the majority of businesses, large and small, still do physical things: they make, ship, or sell goods and services, and they depend on traditional applications like Microsoft Exchange, SQL Server, Oracle, SAP, and a hundred others, which often benefit from the classic VM paradigm of multiple OSes on one server.
The biggest drawback containers face today is inadequate security. Container technology does not isolate each containerized application securely enough from its neighbors on the same server, so there’s a concern that a rogue or infected app could impact others. At Bunchball, Joe Schneider runs his apps in containers, but those containers run within VMs, largely because of the security concerns. “Containers still have some catching up to do in terms of security,” says Knaup. “In banks or other highly regulated industries that require strong isolation between workloads, you will see VMs continue to be used for a long time to come.”
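The architectural root of the concern is easy to demonstrate: every container on a host shares that host’s single Linux kernel, whereas each VM boots a kernel of its own. A minimal sketch (the docker step assumes a Docker daemon is available, so it is guarded):

```shell
# The host's kernel version:
uname -r
# A freshly started container reports the SAME kernel version,
# because a container has no kernel of its own -- so a kernel-level
# exploit in one container endangers every neighbor on the host.
if command -v docker >/dev/null 2>&1; then
  docker run --rm alpine uname -r
fi
```

Running the container inside a VM, as Bunchball does, puts a second, hardware-enforced boundary (the hypervisor) between workloads.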
With all the startups and all the venture dollars focused on containers, it seems likely that most of these drawbacks will be overcome in the next few years. That is likely to mean a major shift in infrastructure from VMs to containers, in both public clouds and private enterprise data centers. As container technology matures, Schneider sees private data centers moving wholesale to containers. “People running their own data centers will see a huge disruption as an incredibly large percentage will have no need to run VMs in their data centers, and will instead use encapsulation and minihosts [i.e. containers].” He adds that public cloud providers will probably opt to continue to use VMs for the flexibility they provide, with one possible exception: “I could see AWS switch from VMs to containers as their base unit of computing.” AWS has traditionally driven hard for low cost rather than broad application compatibility.
Joyent’s Cantrill forecasts that in five years, 50% of enterprise infrastructure will have moved from VMs to containers. Mesosphere’s Knaup, at the more cautious end of the enthusiasts’ spectrum, forecasts that it might be 20%. “We have customers in manufacturing, finance, and media. The Two Sigma hedge fund is using containers for financial analysis. Verizon is using it for go90 [a new video streaming service for mobiles]. In five years, 20% of the VM market will be using containers,” he predicts. “Ultimately, it depends on how well companies like us can help enterprises make the migration.”
VMware Integrated Containers
If that 20% number turns out to be accurate, that’s bad news for VMware, whose core virtualization suite, vSphere, has already started to decline in revenue, a victim of its own enormous market success. But VMware has been working for over a year on its response, VMware Integrated Containers. Senior VMware engineer Ben Corrie played a key role in getting the project off the ground and into its current prototype state, and he tells us it is not far from being ready as a product. A VMware Integrated Container, or VIC, is a slimmed-down VMware virtual machine that incorporates the key features of both VMs and containers in a package only slightly larger than a container. Startup speed matters here: a container typically goes live in a few seconds, compared with as much as a minute for a VM. “We’re getting all the benefits of both sides,” Corrie says. “We’re getting the packaging efficiency, the small size, and the speed of containers combined with the security and robustness of VMs. People have been shocked at how quickly we can create VMs and get rid of them.” Corrie says customers who have seen demos of VIC technology are impressed: “Our customers are lining up for it.”
While VMware has yet to announce any product plans for VICs, and Corrie emphasizes that he is not part of the team that makes product decisions, he says that if VMware decides to productize VICs, customers would gain a host of related advantages, such as VICs’ ability to integrate with VMware’s growing software platforms in storage and networking. Bunchball’s Schneider cites another ace in VMware’s hand: its skill at managing large, complex workloads. “Running 10,000 workloads and connecting them all together, that’s a hard problem, and VMware has a lot of expertise in that area of massive scale.”
Ben Corrie would like to see VMware open source the core VIC technology, as a way of spreading its popularity more widely within the software community. The leading container technologies, such as Docker and the Kubernetes orchestration system (developed at Google), are maintained as open source, which has contributed to their widespread adoption. Alongside an open source VIC, Corrie would like to see a completely new version of vSphere, incorporating VIC technology, built for commercial release. “We hope we’ll be able to provide enough benefit on the vSphere platform that customers will choose to run it on vSphere.”
Right now, as VMware readies itself for control by Dell in a complex financial transaction, it would not want to confuse investors by talking about major technological shifts. vSphere is still a massively profitable cash machine, the major driver of VMware’s 86% gross margin in Q4 2015. And that is the challenge: risking the goose that lays the golden eggs takes courage, and guiding it into a new world of containers while retaining the support of customers, employees, and investors requires great management skill.
Says Bryan Cantrill: “Containers are truly disruptive. And the challenge VMware will have is that when you benefit from one economic revolution, it’s very hard to benefit from the technology of the next revolution that obviates the previous one.”