Introduction to Big Data Formats: Understanding Avro, Parquet and ORC
If you’re not a database expert, the choices and nuances of big data formats can be overwhelming. Increasingly, analysts, data scientists, engineers and business users need to know these formats in order to make decisions and understand workflows.
What readers can expect from this paper:
- The analyst and data scientist may gain more insight into why different formats emerged, and some of the trade-offs required when choosing a format
- The data engineer might better understand the evolution of data formats and ideal use cases for each type
- The business user will be able to understand why their analysts and engineers may prefer certain formats—and what “Avro,” “Parquet,” and “ORC” mean!
HOW TO CHOOSE THE RIGHT DATA FORMAT
Before we dive into the ins and outs of different big data formats, it is helpful to establish a framework to use when evaluating them. In this paper, we will use a framework that takes into account the considerations engineers and analysts might have when selecting a format and helps to introduce basic concepts for non-technical readers. It is not meant to be comprehensive and indeed, your specific use case might need to consider other variables.
At its core, this evaluation framework has four key considerations: row or column, schema management, splitability, and compression. Let’s explain each of these in turn and how Avro, Parquet, and ORC rank for each one.
CONSIDERATION ONE: ROW VS. COLUMN
Perhaps the most important consideration when selecting a big data format is whether a row-based or a column-based format is best suited to your objectives. At the highest level, column-based storage is most useful when performing analytics queries that require only a subset of columns examined over very large data sets. If your queries require access to all or most of the columns of each row of data, row-based storage will be better suited to your needs. Let's examine these two types.
To help illustrate the differences between row and column-based data, consider this table of basic transaction data. For each transaction, we have the customer name, the product ID, the sale amount, and the date.

Customer   Product ID   Sale Amount   Transaction Date
Emma       Prod1        100.00        2018-04-02
Liam       Prod2        79.99         2018-04-02
Noah       Prod3        19.99         2018-04-01
Olivia     Prod2        79.99         2018-04-03
Let’s first consider the case where the transaction data is stored in a row-based format. In this format, every row in the set has all the columns contained in the data’s schema. Row-based storage is the simplest form of data table and is used in many applications, from web log files to highly structured database systems like MySQL and Oracle.
In a database, this data would be stored by row, as follows:
Emma,Prod1,100.00,2018-04-02;Liam,Prod2,79.99,2018-04-02;Noah,Prod3,19.99,2018-04-01;Olivia,Prod2,79.99,2018-04-03
To process this data, a computer would read from left to right, starting at the first row and then reading each subsequent row.
Storing data in this format is ideal when you need to access one or more entries and all or many columns for each entry. For example, let’s say you’re presenting customer transaction history to an account manager. The account manager needs to view all the records from her clients (e.g., the four transactions shown above), and many columns (e.g., the customer, product, price, and date columns above). That strongly suggests using row-based storage. Row-based data is most useful when you want to use many of the fields associated with an entry— and you need to access many entries.
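The row-based layout above can be sketched in plain Python as a list of records, one per transaction. This is an illustrative model only, not any particular format's API; the field names are taken from the table above.

```python
# Row-based storage: each record keeps all of its fields together,
# mirroring how a row-oriented system lays out the transaction data.
rows = [
    {"customer": "Emma",   "product": "Prod1", "amount": 100.00, "date": "2018-04-02"},
    {"customer": "Liam",   "product": "Prod2", "amount": 79.99,  "date": "2018-04-02"},
    {"customer": "Noah",   "product": "Prod3", "amount": 19.99,  "date": "2018-04-01"},
    {"customer": "Olivia", "product": "Prod2", "amount": 79.99,  "date": "2018-04-03"},
]

# Typical row-oriented access: fetch whole records and use most of their
# columns, e.g. building a transaction history for an account manager.
history = [
    f'{r["customer"]} bought {r["product"]} for {r["amount"]:.2f} on {r["date"]}'
    for r in rows
]
for line in history:
    print(line)
```

Because each record arrives as a complete unit, this access pattern pays no penalty for touching every column, which is exactly the account-manager scenario described above.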
Column-based data formats, as you might imagine, store data by column. Using our transaction data as an example, in a columnar database this data would be stored as follows:
Emma,Liam,Noah,Olivia;Prod1,Prod2,Prod3,Prod2;100.00,79.99,19.99,79.99;2018-04-02,2018-04-02,2018-04-01,2018-04-03
In columnar formats, data is stored sequentially by column, from top to bottom—not by row, left to right. Having data grouped by column makes it easier to focus computation on specific columns of data. Reading only the relevant columns saves compute costs because irrelevant columns are ignored. Storing the data sequentially by column also allows for a faster scan, because all relevant values are stored next to each other; there is no need to search for values within the rows. Column-based storage is also ideal for sparse data sets where you may have many empty values.
Continuing with our transaction data, let’s imagine a company with thousands of transactions. What is the easiest way to find the highest value sale by date? It’s really easy to find the answer if you think about it from a columnar perspective—you just need to know the “Sale Amount” and “Transaction Date.” If you can fetch those two columns, you can perform the operation and get the answer.
If you were to do this row-wise, then the database would have to fetch all the rows and with each row, all the columns. It would unnecessarily incur the overhead of fetching columns that were not needed for the final result. When you need to analyze select columns in the data, columnar becomes the clear choice.
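A columnar layout can be sketched as one list per column; answering the "highest sale by date" question then touches only the two relevant columns. Again, this is an illustrative model in plain Python, not the on-disk layout of any specific format.

```python
# Column-based storage: the values of each column are stored contiguously.
columns = {
    "customer": ["Emma", "Liam", "Noah", "Olivia"],
    "product":  ["Prod1", "Prod2", "Prod3", "Prod2"],
    "amount":   [100.00, 79.99, 19.99, 79.99],
    "date":     ["2018-04-02", "2018-04-02", "2018-04-01", "2018-04-03"],
}

# Highest sale amount per date: only the "amount" and "date" columns are
# read; the "customer" and "product" columns are never touched.
highest_by_date = {}
for date, amount in zip(columns["date"], columns["amount"]):
    if amount > highest_by_date.get(date, float("-inf")):
        highest_by_date[date] = amount

print(highest_by_date)
```

A row-wise version of the same query would have to iterate over full records, pulling in the customer and product fields only to discard them, which is the overhead the paragraph above describes.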
To gain a comprehensive introduction to Avro, Parquet, and ORC, download the 12-page Introduction to Big Data Formats whitepaper. Beyond the topics covered above, the paper explains the challenges involved in converting formats and how to overcome them. Plus, learn a flexible framework to help you evaluate which format is right for you.