Apache ORC
Initial release: 20 February 2013 (2013-02-20)[1]
Stable release: 1.6.0 / 3 September 2019 (2019-09-03)[2]
Repository: ORC Repository
Operating system: Cross-platform
Type: Database management system
License: Apache License 2.0
Website: orc.apache.org

Apache ORC (Optimized Row Columnar) is a free and open-source column-oriented data storage format of the Apache Hadoop ecosystem. It is similar to other columnar-storage file formats available in the Hadoop ecosystem, such as RCFile and Parquet, and is compatible with most of the data processing frameworks in the Hadoop environment.
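The following is a minimal, illustrative sketch of writing an ORC file with the project's core Java library (org.apache.orc); the schema, file name, and sample values are assumptions made for this example and are not part of the article.

  // Minimal sketch: write a small ORC file with the org.apache.orc core library.
  // Schema, path, and values are illustrative only.
  import java.nio.charset.StandardCharsets;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
  import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
  import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
  import org.apache.orc.OrcFile;
  import org.apache.orc.TypeDescription;
  import org.apache.orc.Writer;

  public class OrcWriteExample {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // Column-oriented schema: each field is stored as its own column stream.
          TypeDescription schema = TypeDescription.fromString("struct<name:string,age:int>");

          Writer writer = OrcFile.createWriter(new Path("people.orc"),
                  OrcFile.writerOptions(conf).setSchema(schema));

          // Rows are buffered into vectorized batches before being written out.
          VectorizedRowBatch batch = schema.createRowBatch();
          BytesColumnVector name = (BytesColumnVector) batch.cols[0];
          LongColumnVector age = (LongColumnVector) batch.cols[1];

          String[] names = {"Ada", "Grace", "Edsger"};
          long[] ages = {36, 45, 72};
          for (int i = 0; i < names.length; i++) {
              int row = batch.size++;
              name.setVal(row, names[i].getBytes(StandardCharsets.UTF_8));
              age.vector[row] = ages[i];
          }
          writer.addRowBatch(batch);
          writer.close();
      }
  }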

In February 2013, the Optimized Row Columnar (ORC) file format was announced by Hortonworks in collaboration with Facebook.[3] A month later, the Apache Parquet format was announced, developed by Cloudera and Twitter.[4]

Comparison

Apache ORC is comparable to the RCFile and Parquet file formats; all three are columnar data storage formats within the Hadoop ecosystem. Compared with row-oriented formats, they offer better compression and encoding and improved read performance, at the cost of slower writes.
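As an illustration of why columnar reads can be faster, the following hypothetical sketch uses Apache Spark's Java API to query an ORC file; the path and column names are invented for the example. Because the data is laid out column by column, the engine only needs to read the selected columns and can push the filter down to the file rather than scanning entire rows.

  // Hypothetical sketch using Apache Spark's Java API; path and column names
  // are illustrative. Only the selected column streams are read from disk,
  // and the filter can be pushed down to the ORC reader.
  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;

  public class OrcReadExample {
      public static void main(String[] args) {
          SparkSession spark = SparkSession.builder()
                  .appName("OrcColumnPruning")
                  .getOrCreate();

          Dataset<Row> events = spark.read().orc("hdfs:///data/events.orc");

          // Column pruning: only "user_id" and "amount" are decoded.
          events.select("user_id", "amount")
                .filter("amount > 100")
                .show();

          spark.stop();
      }
  }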

A comprehensive study comparing several table storage formats in Hadoop computing environments was conducted, and its results were published in a paper at VLDB 2013.[5] In that study, the basic structure of a table storage format consists of three core operations: row reordering, table partitioning, and data packing. The study also gives the rationale behind the design of ORC, with performance comparisons based on different setups of these core operations.

References

  1. ^ "The Stinger Initiative: Making Apache Hive 100 Times Faster". Retrieved Jan 1, 2019.
  2. ^ "Releases".
  3. ^ Alan Gates (February 20, 2013). "The Stinger Initiative: Making Apache Hive 100 Times Faster". Hortonworks blog. Retrieved Dec 31, 2018.
  4. ^ Justin Kestelyn (March 13, 2013). "Introducing Parquet: Efficient Columnar Storage for Apache Hadoop". Cloudera blog. Archived from the original on September 19, 2016. Retrieved May 4, 2017.
  5. ^ Yin Huai; Siyuan Ma; Rubao Lee; Owen O'Malley; Xiaodong Zhang (2013). "Understanding insights into the basic structure and essential issues of table placement methods in clusters". Proceedings of the VLDB Endowment. 6 (14).