Selecting the right hardware for a Hadoop cluster

I’m summarizing this article. For specifics (such as how to split machines across racks to make better use of the network switches), see the article. None of this content is operating system or hardware vendor specific, but the discussion generally assumes Linux.

The goal is to minimize data movement by processing on the same machine that stores the data. Therefore each machine in the cluster needs appropriate CPU and disk. The problem is that when building the cluster, the nature of the queries and the resulting bottlenecks may not yet be known. If a business is building its first Hadoop cluster, it may not yet fully understand the types of business problems that will eventually be solved by it. That’s in contrast to a business deploying its Nth Oracle server.

Types of bottlenecks:

  1. IO: reading from disk (or a network location)
    • indexing
    • data import/export
    • data transformation
  2. CPU: processing the Map query
    • clustering
    • text mining
    • natural language processing

Other issues, since a cluster could eventually scale to hundreds or thousands of machines:

  1. Power
  2. Cooling

The Cloudera Manager can provide realtime statistics about how a currently running MapReduce job impacts the CPU, disk, and network load.

Roles of the components within a Hadoop cluster:

  1. Name Node (and Standby Name Node): coordinating data storage on the cluster
  2. Job Tracker: coordinating data processing
  3. Task Tracker
  4. Data Node

Data Node and Task Tracker

  • The vast majority of the machines in a cluster will only perform the roles of Data Node and Task Tracker, which should not run on the same machines as the Name Node and Job Tracker.
  • Other components (such as HBase) should only be run on the Data Nodes if they operate on the data; you want to keep data as local as possible. HBase needs about a 16 GB heap to avoid garbage collection timeouts. Impala will consume up to 80% of available RAM.
  • Assumed to be lower performance machines than the Name Node and Job Tracker

Name Node and Job Tracker

  • Standby Name Node should (obviously) not be on the same machine as the Name Node.
  • Name Node (and Standby Name Node) and Job Tracker should be enterprise class machines (redundant power supplies, enterprise class RAID’ed disks)
  • Name Node should have RAM in proportion to the number of data blocks in the cluster: 1 GB of RAM for every 1 million blocks in HDFS. For a 100 Data Node cluster, 64 GB of RAM is fine (a quick sizing sketch follows this list). Since the machine’s tasks will be disk intensive, you’ll want enough RAM to minimize virtual memory swapping to disk.
  • 4 – 6 TB of disk, not RAID’ed (JBOD configuration)
  • 2 CPUs (at least quad core). Recommend more CPUs and/or cores as opposed to faster CPU speed, since in a large cluster the higher clock speed will draw more power and generate more heat, yet not scale as well as simply having more CPUs or, better yet, more nodes.
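Here’s a minimal sketch of that sizing rule of thumb (roughly 1 GB of Name Node heap per 1 million HDFS blocks). The block size, node count, per-node capacity, and replication factor below are illustrative assumptions, not figures from the article:

```python
def namenode_ram_gb(total_data_tb, block_size_mb=128, gb_per_million_blocks=1.0):
    """Rule-of-thumb Name Node heap: ~1 GB of RAM per 1 million HDFS blocks."""
    total_data_mb = total_data_tb * 1024 * 1024
    # Assumes files roughly fill their blocks; many small files inflate the block count.
    num_blocks = total_data_mb / block_size_mb
    return (num_blocks / 1_000_000) * gb_per_million_blocks

# Illustrative figures (not from the article): 100 Data Nodes, ~48 TB usable each,
# 3x replication, so only the unique data counts toward the block total.
unique_data_tb = 100 * 48 / 3
print(f"~{namenode_ram_gb(unique_data_tb):.0f} GB of Name Node heap "
      f"for {unique_data_tb:.0f} TB of unique data")
```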

Cloudera has defined four standard configurations

  1. Light Processing (I’m not sure what the use case is for this. Prototype? Sandbox?)
  2. Balanced Compute (recommended for your 1st cluster, since it’s not likely you’ll properly identify which configuration is best suited for your use case)
  3. Storage Heavy
  4. Compute Heavy

Apache Ambari: A suite of applications/components to provision, manage, and monitor Hadoop clusters

System Admins:

Provision

  • Wizard for installing/configuring Hadoop services across many hosts

Manage

  • Start, stop, reconfigure Hadoop across many hosts

Monitor

  • Dashboard for health & status
  • Metrics via Ganglia (Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids)
  • Alerting via Nagios

Developers:

  • Integrate provisioning, management, and monitoring into their own applications using the Ambari REST APIs
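As a rough illustration of what that integration looks like, here’s a hedged Python sketch that lists a cluster’s services through Ambari’s v1 REST API. The host, credentials, and cluster name are placeholders, and endpoint details can vary by Ambari version:

```python
import base64
import json
import urllib.request

# Placeholder connection details; substitute your Ambari server, credentials, and cluster name.
AMBARI = "http://ambari.example.com:8080"
CLUSTER = "mycluster"
AUTH = base64.b64encode(b"admin:admin").decode()

def ambari_get(path):
    """Issue an authenticated GET against the Ambari REST API and return parsed JSON."""
    req = urllib.request.Request(f"{AMBARI}/api/v1{path}")
    req.add_header("Authorization", f"Basic {AUTH}")
    # Ambari requires X-Requested-By on state-changing calls; it is harmless on reads.
    req.add_header("X-Requested-By", "ambari")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# List the services Ambari is managing for the cluster (HDFS, MAPREDUCE, HBASE, ...).
services = ambari_get(f"/clusters/{CLUSTER}/services")
for item in services.get("items", []):
    print(item["ServiceInfo"]["service_name"])
```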

These tools are supported by Ambari:

  • HDFS, MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig, Sqoop

Moving data between Hadoop and relational databases

Sqoop

  • Tool for bi-directional data transfer between Hadoop and relational databases using JDBC.
  • Optimized drivers for specific database vendors are available.
  • Command line tool (see the sketch after this list)
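Here’s a hedged sketch of what a typical Sqoop import looks like, driven from Python simply to keep one language across the examples. The JDBC URL, credentials, table name, and HDFS path are placeholders:

```python
import subprocess

# Placeholder details for importing one relational table into HDFS with Sqoop.
cmd = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://dbhost.example.com/sales",  # JDBC URL of the source database
    "--username", "etl",
    "--password", "secret",          # prefer --password-file or -P (prompt) in practice
    "--table", "orders",             # source table to copy
    "--target-dir", "/data/orders",  # HDFS directory to write into
    "--num-mappers", "4",            # parallelism: number of map tasks doing the copy
]
subprocess.run(cmd, check=True)
```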

Flume and FlumeNG (Next Generation)

  • Enables realtime streaming into HDFS and HBase.
  • The use case for Flume is streaming data, such as continual input from web server logs.

Apache Hadoop and its components

Hadoop consists of two components

  1. MapReduce –
    • programming framework
    • Map
      • distributes work to different Hadoop nodes
    • Reduce
      • gathers results from multiple nodes and resolves them into a single value
      • the input comes from HDFS, and the output is typically written back to HDFS (a minimal sketch of this map/shuffle/reduce flow appears at the end of this section)
    • Job Tracker: manages nodes
    • Task Tracker: takes orders from the Job Tracker
    • MapReduce was originally developed by Google.
    • Apache MapReduce is built on top of Apache YARN which is a framework for job scheduling and cluster resource management.
  2. HDFS (Hadoop Distributed File System) – file store
    • It is neither purely a file system nor purely a database, yet it behaves like both.
    • Within HDFS are two components
      • Data Nodes:
        • data repository
      • Name Nodes:
        • where to find the data; maps blocks of data on slave nodes (where job and task trackers are running)
        • Open, Close, Rename files
    • On top of HDFS you can run HBase
      • Super scalable (billions of rows, millions of columns) repository for key-value pairs
      • This is not a relational database; it cannot have multiple indices
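To make the Map and Reduce roles described above concrete, here is a minimal, purely local Python sketch of the classic word-count pattern. It simulates the map, shuffle, and reduce phases in a single process and does not use the Hadoop APIs; on a real cluster the map and reduce functions would run as tasks coordinated by the Job Tracker:

```python
from collections import defaultdict

# Input records as they might arrive from HDFS (one line of text per record).
records = ["the quick brown fox", "the lazy dog", "the quick dog"]

# Map phase: each record is turned into (key, value) pairs independently,
# so this work can be spread across many Task Trackers.
def map_fn(line):
    for word in line.split():
        yield (word, 1)

# Shuffle phase: the framework groups all values for the same key together.
grouped = defaultdict(list)
for record in records:
    for key, value in map_fn(record):
        grouped[key].append(value)

# Reduce phase: each key's values are resolved into a single result,
# which would typically be written back to HDFS.
def reduce_fn(key, values):
    return (key, sum(values))

results = [reduce_fn(k, v) for k, v in grouped.items()]
print(results)   # e.g. [('the', 3), ('quick', 2), ('brown', 1), ...]
```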