- Encode the relationships between objects in tables, and use keys to link the tables together
- A standard query language (with emphasis on standard: it applies to all database vendors, versions, implementations, and programmers) relies on the relationship encoding and on the vendor's architecture for optimization and efficiency
- Algorithms rely on single-pass execution, using operations such as joins, group-bys, and counts.
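The two ideas above can be sketched with Python's built-in sqlite3 module. The schema and data here are hypothetical, but the pattern is the core of the relational world: keys encode the relationship between tables, and one declarative query performs the join, group-by, and aggregation in a single pass.

```python
import sqlite3

# Hypothetical schema: orders are linked to customers by a foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         amount REAL);
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Ada"), (2, "Bob")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 10.0), (2, 1, 15.0), (3, 2, 7.5)])

# One declarative query: join on the key, group, then count and sum.
# The engine, not the programmer, decides how to execute it.
rows = conn.execute("""
    SELECT c.name, COUNT(o.id), SUM(o.amount)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Ada', 2, 25.0), ('Bob', 1, 7.5)]
```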
Big Data World
- Based on linear algebra and probability theory
- Encode objects using a property list
- Data stored as a matrix, similar to relational tables, except that the intersection of multiple matrices does not imply relationships
- Algorithms have iterative solutions with multiple steps, each of which stores results that serve as input to the next step; this is very inefficient to execute in SQL
- Indices are not needed, since massively scaled hardware will process the entire data set, either by brute force or by intelligent jobs (on the front side in Map, or on the back side in Reduce).
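A toy word count in plain Python (not a real Hadoop job; the data is made up) shows the Map and Reduce sides named in the last bullet. Note that no index is involved: every record is scanned.

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Front side: emit (key, value) pairs for each record.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # What the framework does between phases: group values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Back side: combine all values observed for one key.
    return key, sum(values)

lines = ["big data big hardware", "data data"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 2, 'data': 3, 'hardware': 1}
```

On a real cluster the map and reduce calls run in parallel across machines, and the shuffle moves data over the network; the brute-force full scan is what makes indices unnecessary.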
Either you structure your data ahead of time so that SQL algorithms will work, or you break your algorithms down into algebra (MapReduce jobs) in order to process semi-structured data.
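"Breaking an algorithm down into jobs" often means chaining them: each job materializes its full output, which the next job re-reads as input. A minimal sketch in plain Python (hypothetical data, with a generic `run_job` helper standing in for the framework):

```python
from collections import defaultdict

def run_job(records, mapper, reducer):
    # One MapReduce-style job: map every record, group by key, reduce.
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return [reducer(key, values) for key, values in groups.items()]

lines = ["to be or not to be"]

# Job 1: count each word.
counts = run_job(lines,
                 mapper=lambda line: [(w, 1) for w in line.split()],
                 reducer=lambda k, vs: (k, sum(vs)))

# Job 2: consume job 1's stored output; group words by frequency.
by_freq = run_job(counts,
                  mapper=lambda kv: [(kv[1], kv[0])],
                  reducer=lambda k, vs: (k, sorted(vs)))
print(dict(by_freq))  # {2: ['be', 'to'], 1: ['not', 'or']}
```

In a real pipeline each intermediate result lands on distributed storage between jobs, which is exactly the multi-step, store-and-reload pattern that is painful to express in single-pass SQL.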
Where does this leave systems like Hive, which let programmers write something that looks like SQL and have it transformed on the backend into MapReduce jobs? Maybe purists don't like Hive because it's used by people on the fence between the Database and Big Data worlds, rather than by those who have fully converted to Big Data?
Some systems are similar to, yet different from, Hadoop/MapReduce. They claim to be Big Data, but have roots in the database world:
- Twitter’s Storm/Summingbird is event-driven (not batch), so it can target real-time applications
- Spark uses iterative algorithms and in-memory processing with the goal of being a few orders of magnitude faster than MapReduce
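The Spark point can be illustrated without the Spark API at all (this is a plain-Python sketch, not Spark code; the link graph is made up). The data set is loaded into memory once and reused by every iteration, whereas a chain of MapReduce jobs would re-read it from disk on each pass:

```python
# Toy PageRank-style iteration: `links` stays in memory across all
# iterations instead of being reloaded by a fresh batch job each time.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = {page: 1.0 for page in links}

for _ in range(20):  # iterative algorithm with many cheap passes
    contribs = {page: 0.0 for page in links}
    for page, outlinks in links.items():
        share = ranks[page] / len(outlinks)
        for target in outlinks:
            contribs[target] += share
    ranks = {page: 0.15 + 0.85 * c for page, c in contribs.items()}

print({page: round(rank, 3) for page, rank in ranks.items()})
```

Keeping the working set resident between iterations is where the claimed speedup over disk-based MapReduce comes from.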