Apache Hadoop is a compelling platform for building distributed systems for big data. It replicates data across hosts and racks of hosts to protect against individual disk, host, and even rack failures. Given Hadoop's built-in reliability and workload consolidation features, it might appear there is little need to virtualize it. However, several use cases make virtualization of this workload compelling:
- Enhanced availability with capabilities like VMware High Availability and Fault Tolerance.
- Easier deployment with vSphere tools or Serengeti, leading to simpler and faster datacenter management.
- Sharing resources with other Hadoop clusters or completely different applications, for better datacenter utilization.
In addition, virtualization enables new ways of integrating Hadoop workloads into the datacenter:
- Elasticity makes it possible to quickly grow a cluster as needs warrant, and to shrink it just as quickly to release resources to other applications.
- Multi-tenancy allows multiple virtual clusters to share a physical cluster while maintaining strong isolation between them.
- Greater security within each cluster, as well as elasticity (the ability to quickly resize a cluster), requires separating the computational (TaskTracker) and data (DataNode) parts of Hadoop onto separate machines. However, data locality (and thus performance) requires them to remain on the same physical host, leading to the use of virtual machines.
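
The compute/data split described above can be sketched with the standard Hadoop 1.x per-role daemon commands. This is an illustrative operational fragment, not a complete deployment: the `$HADOOP_HOME` path and the one-data-VM-plus-one-compute-VM layout per physical host are assumptions, and each VM is presumed to already point at the cluster's NameNode and JobTracker via its `*-site.xml` configuration files.

```shell
# Assumed layout (illustrative): one data VM and one compute VM per
# physical host, both configured against the same NameNode/JobTracker.

# On the data VM: run only the HDFS DataNode (storage layer).
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode

# On the compute VM: run only the MapReduce TaskTracker (execution layer).
$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker

# Elasticity: compute VMs can be stopped (or cloned and started)
# independently of the data VMs, without moving any HDFS blocks.
$HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker
```

Because the DataNode VMs are untouched when TaskTracker VMs come and go, the cluster can be resized quickly while the data, and its locality on the physical host, stays in place.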
So, if you are interested in virtualizing a Hadoop environment, check out VMware's whitepaper for best practices on configuration, performance, and availability.