Category Archives: Yarn

Hadoop job scheduling that takes network bandwidth into account

A research paper from Cornell University discusses scheduling Hadoop jobs based on an analysis of available network bandwidth; typically a Hadoop cluster considers only server node availability when scheduling. The approach assumes Software Defined Networking (SDN), a new front in virtualization technology that is critical for dynamic scaling of clouds.

Source:

Fast Search and Analytics on Hortonworks with Elasticsearch

Elasticsearch on Hortonworks enables real-time search and analytics. Yarn is supported, and the integration extends into Hive and Pig.

Source:

Yarn provides greater scalability than MapReduce

MapReduce 1.0 could have resource utilization problems because a single job could claim all of the map slots while some reduce slots sat empty (and vice versa). Yarn (Yet Another Resource Negotiator) splits the JobTracker into a global ResourceManager (RM) and a per-application ApplicationMaster (AM), which works with the per-machine NodeManagers to execute and monitor tasks. The ResourceManager contains a Scheduler that only schedules (it does not monitor).

This design is far more scalable in 2.0 than in 1.0; Hortonworks has successfully simulated a 10,000-node cluster. That is possible because the ApplicationMaster is not global to the entire cluster: there is an instance per application, so it is no longer a bottleneck through which all applications must pass.

The ResourceManager is also more scalable in 2.0 since its scope is reduced to scheduling; it is no longer responsible for fault tolerance of the entire cluster.

Source:

How to Plan and Configure YARN

A good (and very short) article about configuring Yarn, specifically how to allocate CPU and RAM among the OS, the JVMs, and MapReduce.
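
As a rough sketch of the kind of sizing the article walks through (the 48 GB node and the numbers below are my own illustrative assumptions, not values from the article), the relevant knobs normally live in yarn-site.xml and mapred-site.xml; expressed through Hadoop's Java Configuration API they look something like this:

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class YarnMemoryPlanSketch {
        public static void main(String[] args) {
            // Hypothetical 48 GB worker node: reserve ~8 GB for the OS and Hadoop
            // daemons, leaving ~40 GB for YARN containers. Numbers are illustrative.
            YarnConfiguration conf = new YarnConfiguration();
            conf.setInt("yarn.nodemanager.resource.memory-mb", 40960); // RAM YARN may hand out on this node
            conf.setInt("yarn.scheduler.minimum-allocation-mb", 2048); // smallest container the RM will grant
            conf.setInt("yarn.scheduler.maximum-allocation-mb", 8192); // largest single container
            conf.setInt("mapreduce.map.memory.mb", 4096);              // container size for a map task
            conf.setInt("mapreduce.reduce.memory.mb", 8192);           // container size for a reduce task
            conf.set("mapreduce.map.java.opts", "-Xmx3276m");          // JVM heap kept below the container limit
            conf.set("mapreduce.reduce.java.opts", "-Xmx6553m");
            System.out.println("RAM available to containers on this node: "
                    + conf.getInt("yarn.nodemanager.resource.memory-mb", 0) + " MB");
        }
    }

The usual rule of thumb is to set the JVM heap (-Xmx) to roughly 80% of the container size so the task is not killed for exceeding its allocation.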

Source:

What benefit does Yarn bring to the existing MapReduce?

Within classic MapReduce is the Job Tracker component. Yarn splits the Job Tracker into two further components: the Resource Manager (aka RM), which allocates CPU, RAM, etc., and the Node Manager (aka NM), which operates at the level of a single node/machine. The Application Master (aka AM) negotiates resources from the Resource Manager and works with the Node Manager to execute tasks. Job Tracker is already an ancient architecture (five years old!).

Yarn is sometimes referred to as MapReduce 2.0 or MRv2.

The Resource Manager supports hierarchical application queues to guarantee allocation ratios of cluster resources. However, it does not handle recovery from application or hardware failures; it does not monitor, it only schedules. Scheduling methods include FIFO (the default) and Capacity; Fair is not currently supported.
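
To give a feel for what those hierarchical queues look like (the queue names and percentages below are made up for illustration; in practice this lives in capacity-scheduler.xml), here is a minimal sketch using the same Hadoop Configuration API:

    import org.apache.hadoop.conf.Configuration;

    public class CapacityQueueSketch {
        public static void main(String[] args) {
            // Two top-level queues splitting the cluster 70/30, with "prod"
            // subdivided further. Child capacities are percentages of the parent.
            Configuration conf = new Configuration();
            conf.set("yarn.scheduler.capacity.root.queues", "prod,adhoc");
            conf.setInt("yarn.scheduler.capacity.root.prod.capacity", 70);
            conf.setInt("yarn.scheduler.capacity.root.adhoc.capacity", 30);
            conf.set("yarn.scheduler.capacity.root.prod.queues", "etl,reporting");
            conf.setInt("yarn.scheduler.capacity.root.prod.etl.capacity", 60);
            conf.setInt("yarn.scheduler.capacity.root.prod.reporting.capacity", 40);
        }
    }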

ZooKeeper monitors the Resource Manager in order to switch to a secondary if the Resource Manager itself fails. In a failover scenario, running applications are restarted and the queue continues. Preservation of state within currently running applications is handled by checkpoints that the Application Master stores in HDFS.
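
A minimal sketch of what that failover setup looks like as configuration (ResourceManager HA arrived in later 2.x releases; the cluster id, rm ids, and hostnames below are placeholders of my own):

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class RmHaSketch {
        public static void main(String[] args) {
            // Two ResourceManagers coordinated through a ZooKeeper ensemble;
            // ZooKeeper elects the active RM and triggers failover to the standby.
            YarnConfiguration conf = new YarnConfiguration();
            conf.setBoolean("yarn.resourcemanager.ha.enabled", true);
            conf.set("yarn.resourcemanager.cluster-id", "cluster1");
            conf.set("yarn.resourcemanager.ha.rm-ids", "rm1,rm2");
            conf.set("yarn.resourcemanager.hostname.rm1", "master1.example.com");
            conf.set("yarn.resourcemanager.hostname.rm2", "master2.example.com");
            conf.set("yarn.resourcemanager.zk-address",
                    "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");
        }
    }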

Rather than having specific slots dedicated to executing Map jobs and Reduce jobs, Yarn provides containers for more generic jobs, which lets developers write other applications that run on the cluster.
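
To make that concrete, here is a rough sketch of submitting an arbitrary (non-MapReduce) command as a YARN application. The classes are from the YARN client API, but the application name, command, and container size are placeholders of my own:

    import java.util.Collections;
    import org.apache.hadoop.yarn.api.records.*;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;
    import org.apache.hadoop.yarn.util.Records;

    public class SubmitAnythingSketch {
        public static void main(String[] args) throws Exception {
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(new YarnConfiguration());
            yarnClient.start();

            // Ask the ResourceManager for a new application id.
            YarnClientApplication app = yarnClient.createApplication();
            ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
            ctx.setApplicationName("not-mapreduce");

            // The container that will host this application's ApplicationMaster.
            // Here it just runs a shell command; a real AM would be a process that
            // goes on to negotiate further containers with the ResourceManager.
            ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
            amContainer.setCommands(Collections.singletonList("sleep 60"));
            ctx.setAMContainerSpec(amContainer);

            // A generic CPU/RAM request instead of a fixed map or reduce slot.
            ctx.setResource(Resource.newInstance(512, 1)); // 512 MB, 1 vcore
            ApplicationId appId = yarnClient.submitApplication(ctx);
            System.out.println("Submitted " + appId);
        }
    }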

It’s unclear whether Yarn will make the system run faster or slower; generalization and modularization usually come at a cost. However, Yarn allows for more complete utilization of CPU and RAM, so in theory it can squeeze every last bit of capacity out of a cluster, whereas the fixed-size slots in MapReduce 1.0 could leave some resources idle. Yarn does not manage I/O, which is typically a bigger bottleneck than RAM, and there’s no management of network bandwidth either. (Note to self, I’ve got to figure this out: I saw another article that says Yarn does manage CPU, disk, and network, yet didn’t mention RAM.)

Another benefit of a more modularized architecture is that it makes the system easier to maintain. Any update to MapReduce 1.0 requires replacing a pretty big chunk of software. Being able to run multiple versions of MapReduce within a cluster of thousands of nodes is important, because significant downtime would otherwise be required for upgrades.

Source:

Twitter creates Hadoop hybrid system to mitigate tradeoffs between batch and stream processing

Storm is an open-source system (from Twitter) that processes streams of big data in real time (but without 100% guaranteed accuracy), making it the opposite of Hadoop, which processes a repository of big data in batch.

Twitter needs both streaming and batch, so it created an open-source hybrid system called Summingbird. It does what Storm does, then uses Hadoop for error correction.

Twitter’s use cases include updating timelines and trending topics in real time, but then making sure that the analytics are accurate.

Yahoo’s contribution to this effort was to enable Storm to be configured using Yarn.

Source:

Using Yarn to monitor resources and provision capacity in order to run other applications alongside MapReduce

Hadoop 2.0 enables clusters to grow as large as 4000 nodes within deployments that contain multiple clusters. I think that companies like Google and Facebook each run tens of thousands of nodes.

Using Yarn, developers can run additional applications within the cluster: Yarn monitors what the applications need and then creates CPU/RAM containers within the cluster (and across clusters?) to run them.
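
Here is a hedged sketch of the ApplicationMaster side of that. The classes are from the YARN AM/RM client API, but the container sizes, counts, and registration values are placeholders; a real ApplicationMaster would call allocate() in a heartbeat loop and launch the granted containers through an NMClient.

    import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class AskForContainersSketch {
        public static void main(String[] args) throws Exception {
            // This only works when run inside an application's AM container,
            // since the AM/RM protocol is authenticated with an AM token.
            AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
            rmClient.init(new YarnConfiguration());
            rmClient.start();

            // Tell the ResourceManager this ApplicationMaster is alive.
            rmClient.registerApplicationMaster("", 0, "");

            // Ask for three generic 1 GB / 1 vcore containers anywhere in the cluster.
            Resource capability = Resource.newInstance(1024, 1);
            Priority priority = Priority.newInstance(0);
            for (int i = 0; i < 3; i++) {
                rmClient.addContainerRequest(new ContainerRequest(capability, null, null, priority));
            }

            // One heartbeat; the response would carry any containers granted so far.
            rmClient.allocate(0.0f);

            rmClient.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
            rmClient.stop();
        }
    }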

There’s speculation that eventually Yarn could provide a PaaS using Hadoop in order to compete with VMware’s Cloud Foundry. I suppose that while with VMware you first need to think in terms of virtualizing hardware components and an operating system, Yarn jumps past that to provide an environment abstracted for a specific application.

Source: