
Apache Parquet VS Presto DB

Compare Apache Parquet and Presto DB to see how they differ.

Apache Parquet

Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem.

Presto DB

Distributed SQL Query Engine for Big Data (by Facebook)
  • Apache Parquet landing page (screenshot dated 2022-06-17)
  • Presto DB landing page (screenshot dated 2023-03-18)

Category Popularity

0-100% (relative to Apache Parquet and Presto DB)

  • Databases: Apache Parquet 53%, Presto DB 47%
  • Data Dashboard: Apache Parquet 6%, Presto DB 94%
  • Big Data: Apache Parquet 100%, Presto DB 0%
  • Database Tools: Apache Parquet 0%, Presto DB 100%

User comments

Share your experience using Apache Parquet and Presto DB. For example, how do they differ, and which one is better?

Social recommendations and mentions

Based on our records, Apache Parquet should be more popular than Presto DB: it has been mentioned 19 times since March 2021, compared with 6 mentions of Presto DB. We track product recommendations and mentions on various public social media platforms and blogs. They can help you identify which product is more popular and what people think of it.

Apache Parquet mentions (19)

  • [D] Is there other better data format for LLM to generate structured data?
    The Apache Spark / Databricks community prefers Apache Parquet or the Linux Foundation's delta.io over JSON. Source: 5 months ago
  • Demystifying Apache Arrow
    Apache Parquet (Parquet for short) is nowadays an industry standard for storing columnar data on disk. It compresses data with high efficiency and provides fast read and write speeds. As written in the Arrow documentation, "Arrow is an ideal in-memory transport layer for data that is being read or written with Parquet files". - Source: dev.to / 12 months ago
  • Parquet: more than just "Turbo CSV"
    Googling that suggests this page: https://parquet.apache.org/. Source: about 1 year ago
  • Beginner question about transformation
    You should also consider distribution of data: in a company with machine learning workflows, the same data may need to go through different workflows using different technologies and be stored in something other than a data warehouse, e.g. feature engineering in Spark with results loaded/stored in a binary format such as Parquet in a data lake/object store. Source: about 1 year ago
  • Pandas Free Online Tutorial In Python — Learn Pandas Basics In 5 Lessons!
    This section will teach you how to read and write data to and from a variety of file types, including CSV, Excel, SQL, HTML, Parquet, and JSON. You’ll also learn how to manipulate data from other sources, such as databases and websites (a minimal Parquet read/write sketch follows this list). Source: about 1 year ago
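Several of the mentions above boil down to the same workflow: write tabular data to a compressed, columnar Parquet file and read back only the columns you need. Below is a minimal sketch of that round trip with pandas; it assumes the pyarrow engine is installed, and the file and column names are made-up placeholders.

```python
import pandas as pd

# Build a small DataFrame and write it to a Parquet file.
# Assumes the pyarrow engine is installed (pip install pyarrow);
# "users.parquet" and its columns are illustrative placeholders.
df = pd.DataFrame({
    "user_id": [1, 2, 3],
    "country": ["DE", "US", "BR"],
    "spend": [10.5, 3.2, 7.8],
})
df.to_parquet("users.parquet", engine="pyarrow", compression="snappy")

# Columnar layout means a reader can load just the columns it needs.
subset = pd.read_parquet("users.parquet", columns=["user_id", "spend"])
print(subset)
```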

Presto DB mentions (6)

  • Parsing logs from multiple data sources with Ahana and Cube
    Presto is an open-source distributed SQL query engine, originally developed at Facebook, now hosted under the Linux Foundation. It connects to multiple databases or other data sources (for example, Amazon S3). We can use a Presto cluster as a single compute engine for an entire data lake (a minimal query sketch follows this list). - Source: dev.to / almost 2 years ago
  • Can a data warehouse be skipped?
    Fair point, but I am talking about Athena (not SQL Server), which under the hood uses a distributed query engine. It is capable of dealing with huge amounts of data, if the storage is in the right shape. You can read more about the underlying technology here: https://prestodb.io/. Source: about 2 years ago
  • why use Redshift if we can use S3 to store data and can connect with Quicksight for dashboarding?
    So there is Presto, which is a distributed SQL engine created by Facebook. Source: about 2 years ago
  • Understanding AWS Athena 101
    You can use Athena to run data analytics, with just standard SQL (Presto). - Source: dev.to / over 2 years ago
  • ETL tool for query building across multiple databases in Mongo DB
    Presto does this, but I'm honestly uncertain how performant it is. In my experience, centralizing data is the superior approach to attempting to query multiple sources in place. Source: almost 3 years ago
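The Presto mentions above all describe the same pattern: point a single distributed SQL engine at data where it already lives and query it with standard SQL. The sketch below shows one way to issue such a query from Python; it assumes the presto-python-client package and a reachable Presto coordinator, and the host, catalog, schema, and table names are placeholders.

```python
import prestodb  # pip install presto-python-client (assumed available)

# Connection details are placeholders; point them at your own coordinator.
conn = prestodb.dbapi.connect(
    host="presto.example.com",
    port=8080,
    user="analyst",
    catalog="hive",     # e.g. a Hive catalog over Parquet files in S3
    schema="default",
)
cur = conn.cursor()

# Standard SQL, executed by the distributed engine close to the data.
cur.execute("""
    SELECT country, count(*) AS events
    FROM web_logs
    GROUP BY country
    ORDER BY events DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```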

What are some alternatives?

When comparing Apache Parquet and Presto DB, you can also consider the following products

Apache ORC - Apache ORC is a columnar storage for Hadoop workloads.

Looker - Looker makes it easy for analysts to create and curate custom data experiences—so everyone in the business can explore the data that matters to them, in the context that makes it truly meaningful.

Apache Spark - Apache Spark is an engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing.

Google BigQuery - A fully managed data warehouse for large-scale data analytics.

Apache Kudu - Apache Kudu is Hadoop's storage layer to enable fast analytics on fast data.

Jupyter - Project Jupyter exists to develop open-source software, open standards, and services for interactive computing across dozens of programming languages.