Hortonworks developed solutions to add to Hive the ability to update multiple records as a single transaction, following the ACID model. Part of the complexity of transactional updates is that the data must be written to all applicable nodes before the transaction can be considered complete. The naming convention for files within HDFS folders includes a transaction ID, so both committed and uncommitted files persist until all portions of the transaction have completed. Because the transaction ID is included, any read operations that begin before the transaction completes will access the old data.
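The read-side behavior described above can be sketched in a few lines: a reader only looks at files whose transaction ID is in its set of committed transactions, so in-flight writes stay invisible. This is a minimal illustration of the idea, not Hive's actual implementation; the `delta_<txnid>` naming and the helper function are simplified assumptions.

```python
# Minimal sketch of snapshot reads over transaction-ID-named files.
# The file-naming convention here is a simplification, not Hive's real layout.

def visible_files(files, committed_txns):
    """Return only the files whose transaction ID has committed."""
    visible = []
    for name in files:
        # assume names like "delta_<txnid>" (illustrative convention)
        txn_id = int(name.split("_")[1])
        if txn_id in committed_txns:
            visible.append(name)
    return visible

files = ["delta_5", "delta_7", "delta_9"]   # txn 9 is still in flight
committed = {5, 7}
print(visible_files(files, committed))      # reader sees only committed data
```

Because the uncommitted file simply fails the membership check, a concurrent reader keeps seeing the old data until the commit lands, which is the behavior the paragraph above describes.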
Why go through all of this work to add an ACID model to Hive rather than just use HBase, which already supports transactions? The primary reason is that HBase only guarantees consistency at the level of a single row update, rather than across a larger set of operations. Without consistency, there is no ACID. Hortonworks lists a few other reasons, but I'm discounting them because they are general reasons why they prefer Hive over HBase.
Discussion thread on LinkedIn Group:
Benchmarking and Stress Testing an Hadoop Cluster With TeraSort, TestDFSIO & Co.
Project Gutenberg (approximately 30,000 books)
Wikipedia (full download)
Datasets available through Amazon, such as the Human Genome Project and US Census Database
Hadoop consists of two components:
- MapReduce –
- programming framework
- distributes work to different Hadoop nodes
- gathers results from multiple nodes and resolves them into a single value
- the source data comes from HDFS, and the output is typically written back to HDFS
- Job Tracker: manages nodes
- Task Tracker: takes orders from the Job Tracker
- MapReduce was originally developed by Google.
- Apache MapReduce is built on top of Apache YARN, which is a framework for job scheduling and cluster resource management.
- HDFS (Hadoop Distributed File System) – file store
- It is not quite a file system and not quite a database; it has characteristics of both.
- Within HDFS are two components:
- Data Nodes: store the actual blocks of data
- Name Nodes:
- track where to find the data; map blocks of data to slave nodes (where the job and task trackers are running)
- Open, Close, Rename files
- On top of HDFS you can run HBase
- Super scalable (billions of rows, millions of columns) repository for key-value pairs
- This is not a relational database; it cannot have multiple indices
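The HBase data model described in the last bullets — rows looked up by a single row key, each holding sparse column-to-value pairs — can be mimicked with a dictionary of dictionaries. This is a toy sketch of the model only, not the HBase API; the table, row key, and column names are made up for illustration.

```python
# Tiny sketch of the HBase data model: a key-value repository where the
# row key is the only index. Names here are illustrative, not real data.

table = {}  # row key -> {column: value}

def put(row, column, value):
    table.setdefault(row, {})[column] = value

def get(row, column):
    # The only index is the row key; there is no lookup by column value,
    # which is why the notes say it "cannot have multiple indices".
    return table.get(row, {}).get(column)

put("user#1001", "info:name", "Ada")
put("user#1001", "info:email", "ada@example.com")
print(get("user#1001", "info:name"))  # → Ada
```

The design choice this highlights: scaling to billions of rows and millions of columns is easy precisely because every access path goes through one key.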
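The MapReduce flow outlined earlier — map emits key-value pairs, the framework groups results by key across nodes, and reduce resolves each group into a single value — can be sketched as a single-process word count. This is a sketch of the programming model, not Hadoop's Java API; the function names are my own.

```python
from collections import defaultdict

# Toy, single-process illustration of the MapReduce model: map emits
# key-value pairs, a shuffle step groups them by key, and reduce resolves
# each group into a single value.

def map_phase(line):
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    return key, sum(values)

lines = ["the quick brown fox", "the lazy dog"]          # stands in for HDFS input
pairs = [kv for line in lines for kv in map_phase(line)]  # map on each "node"
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["the"])  # → 2
```

In real Hadoop the map calls run on different nodes against HDFS blocks and the shuffle moves data over the network, but the logical shape of the computation is the same.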