What benefit does Yarn bring over the existing MapReduce?

Within classic MapReduce is the Job Tracker component. Yarn splits the Job Tracker into two components: the Resource Manager (aka RM), which allocates cluster resources such as CPU and RAM, and the Node Manager (aka NM), which operates at the level of a single node/machine. A third piece, the per-application Application Master (aka AM), negotiates resources from the Resource Manager and works with the Node Managers to execute and monitor tasks. The Job Tracker is already an ancient architecture: five years old!
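To make the split concrete, here's a rough sketch (my own, not from the source) of a per-application Application Master talking to the Resource Manager through the Hadoop 2.x AMRMClient API. In real life this code would run inside the container that the Resource Manager launches for the AM, a real AM would loop on the allocate() heartbeat rather than call it once, and exact class and method names should be checked against your Hadoop release.

```java
import java.util.List;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SimpleAppMaster {
  public static void main(String[] args) throws Exception {
    YarnConfiguration conf = new YarnConfiguration();

    // The Application Master registers itself with the Resource Manager.
    AMRMClient<ContainerRequest> rm = AMRMClient.createAMRMClient();
    rm.init(conf);
    rm.start();
    rm.registerApplicationMaster("", 0, "");

    // Ask the RM (the global scheduler) for one container:
    // 1 GB of RAM and 1 virtual core, anywhere in the cluster.
    Resource capability = Resource.newInstance(1024, 1);
    rm.addContainerRequest(
        new ContainerRequest(capability, null, null, Priority.newInstance(0)));

    // allocate() is the heartbeat on which the RM hands back granted containers;
    // the Node Manager on each machine is then asked to launch them.
    // A real AM would keep heartbeating until its requests are satisfied.
    List<Container> granted = rm.allocate(0.0f).getAllocatedContainers();
    System.out.println("Granted containers: " + granted.size());

    rm.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
    rm.stop();
  }
}
```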

Yarn is sometimes referred to as MapReduce 2.0 or MRv2.

The Resource Manager supports hierarchical application queues that guarantee allocation ratios of cluster resources. However, it does not handle recovery from application or hardware failures. It does not monitor; it only schedules. Scheduling policies include FIFO (the default) and Capacity. Fair scheduling is not currently supported.
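As an illustration, the hierarchical queues look roughly like this. In a real cluster these properties go in capacity-scheduler.xml and yarn-site.xml rather than being set in code, and the queue names "prod" and "dev" plus the percentages are made up.

```java
import org.apache.hadoop.conf.Configuration;

public class CapacityQueuesSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);

    // Tell the Resource Manager to use the Capacity Scheduler
    // (normally set in yarn-site.xml).
    conf.set("yarn.resourcemanager.scheduler.class",
        "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler");

    // Two child queues under root; capacities are the percentage of cluster
    // resources guaranteed to each queue (normally set in capacity-scheduler.xml).
    conf.set("yarn.scheduler.capacity.root.queues", "prod,dev");
    conf.set("yarn.scheduler.capacity.root.prod.capacity", "70");
    conf.set("yarn.scheduler.capacity.root.dev.capacity", "30");

    System.out.println("prod share: "
        + conf.get("yarn.scheduler.capacity.root.prod.capacity") + "%");
  }
}
```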

ZooKeeper monitors the Resource Manager so that a secondary can take over if the Resource Manager itself fails. In a failover scenario, running applications are restarted and the queue continues. Preserving the state of currently running applications is handled by checkpoints that the Application Master stores in HDFS.
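In later Hadoop 2.x releases this shows up as configuration. A sketch of what I believe the relevant properties are; the hostnames and ZooKeeper quorum below are made up, and the property names should be double-checked against your release.

```java
import org.apache.hadoop.conf.Configuration;

public class RmHaConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);

    // Keep RM state in ZooKeeper so a standby RM can recover it after failover.
    conf.set("yarn.resourcemanager.recovery.enabled", "true");
    conf.set("yarn.resourcemanager.store.class",
        "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore");
    conf.set("yarn.resourcemanager.zk-address", "zk1:2181,zk2:2181,zk3:2181");

    // Two RM instances; ZooKeeper-based election decides which one is active.
    conf.set("yarn.resourcemanager.ha.enabled", "true");
    conf.set("yarn.resourcemanager.ha.rm-ids", "rm1,rm2");
    conf.set("yarn.resourcemanager.hostname.rm1", "master1.example.com");
    conf.set("yarn.resourcemanager.hostname.rm2", "master2.example.com");
  }
}
```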

Rather than having slots dedicated to executing Map tasks and Reduce tasks, Yarn provides containers for generic jobs, which lets developers write other kinds of applications that run on the cluster.
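Continuing the Application Master sketch above: launching a granted container boils down to handing the Node Manager a command line, and nothing about it is map- or reduce-specific. The shell command here is just a stand-in; any program could go in its place.

```java
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

public class LaunchGenericContainer {
  // 'container' would be one of the containers granted by rm.allocate()
  // in the Application Master sketch above.
  static void launch(Container container) throws Exception {
    NMClient nm = NMClient.createNMClient();
    nm.init(new YarnConfiguration());
    nm.start();

    // The launch context is just a command plus optional local files and
    // environment; it is not tied to map or reduce in any way.
    ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
    ctx.setCommands(Collections.singletonList(
        "/bin/date > /tmp/yarn-demo.txt"));  // any executable would do

    nm.startContainer(container, ctx);
    nm.stop();
  }
}
```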

It’s unclear whether Yarn will make the system run faster or slower. Generalization and modularization usually come at a cost. However, Yarn allows for more complete utilization of CPU and RAM, so in theory it can squeeze every last bit of capacity out of a cluster, whereas the fixed-size map and reduce slots in MapReduce 1.0 could leave some resources idle. Yarn does not manage I/O, which is typically a bigger bottleneck than RAM, and there is no management of network bandwidth either. (Note to self, got to figure this out: I saw another article that says Yarn does manage CPU, disk and network, yet didn’t mention RAM.)
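For what it’s worth, the Resource record in the Hadoop 2.x client API I’ve looked at only carries memory and virtual cores, which fits the “CPU and RAM, but not I/O or network” reading; a tiny sketch:

```java
import org.apache.hadoop.yarn.api.records.Resource;

public class ResourceModelSketch {
  public static void main(String[] args) {
    // A container request expresses only memory (MB) and virtual cores;
    // there is no field for disk I/O or network bandwidth in this API.
    Resource r = Resource.newInstance(4096, 2);
    System.out.println("memory=" + r.getMemory() + "MB, vcores=" + r.getVirtualCores());
  }
}
```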

Another benefit of a more modular architecture is that it makes the system easier to maintain. Any update to MapReduce 1.0 requires replacing a pretty big chunk of software. Being able to run multiple versions of MapReduce within a cluster of thousands of nodes is important; otherwise, significant downtime would be required for upgrades.

Source: