
Apache Spark VS Apache Druid

Compare Apache Spark and Apache Druid and see how they differ.

Apache Spark

Apache Spark is an engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing.

Apache Druid

Fast column-oriented distributed data store
  • Apache Spark landing page (screenshot captured 2021-12-31)
  • Apache Druid landing page (screenshot captured 2023-10-07)

Apache Spark

Categories
  • Databases
  • Big Data
  • Big Data Analytics
  • Big Data Infrastructure
Website: spark.apache.org

Apache Druid

Categories
  • Databases
  • Big Data
  • Data Analysis
  • Big Data Analytics
Website: druid.apache.org

Apache Spark videos

Weekly Apache Spark live Code Review -- look at StringIndexer multi-col (Scala) & Python testing

More videos:

  • Review - What's New in Apache Spark 3.0.0
  • Review - Apache Spark for Data Engineering and Analysis - Overview

Apache Druid videos

An introduction to Apache Druid

More videos:

  • Review - Building a Real-Time Analytics Stack with Apache Kafka and Apache Druid

Category Popularity

Category share on a 0-100% scale, relative to Apache Spark and Apache Druid:
  • Databases: Apache Spark 78%, Apache Druid 22%
  • Big Data: Apache Spark 81%, Apache Druid 19%
  • Stream Processing: Apache Spark 100%, Apache Druid 0%
  • Relational Databases: Apache Spark 0%, Apache Druid 100%

User comments

Share your experience using Apache Spark and Apache Druid. For example, how are they different, and which one is better?

Reviews

These are some of the external sources and on-site user reviews we've used to compare Apache Spark and Apache Druid.

Apache Spark Reviews

15 data science tools to consider using in 2021
Apache Spark is an open source data processing and analytics engine that can handle large amounts of data -- upward of several petabytes, according to proponents. Spark's ability to rapidly process data has fueled significant growth in the use of the platform since it was created in 2009, helping to make the Spark project one of the largest open source communities among big...
Top 15 Kafka Alternatives Popular In 2021
Apache Spark is a well-known, general-purpose, open-source analytics engine for large-scale, core data processing. It is known for its high-performance quality for data processing – batch and streaming with the help of its DAG scheduler, query optimizer, and engine. Data streams are processed in real-time and hence it is quite fast and efficient. Its machine learning...
5 Best-Performing Tools that Build Real-Time Data Pipeline
Apache Spark is an open-source and flexible in-memory framework which serves as an alternative to map-reduce for handling batch, real-time analytics and data processing workloads. It provides native bindings for the Java, Scala, Python, and R programming languages, and supports SQL, streaming data, machine learning and graph processing. From its beginning in the AMPLab at...
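The review excerpts above describe Spark's bindings for Java, Scala, Python, and R and its support for SQL, streaming, machine learning, and graph workloads. As a rough illustration, here is a minimal PySpark batch aggregation; the input file, its fields, and the app name are hypothetical and not taken from any of the reviews:

    # Minimal PySpark sketch (assumes PySpark is installed; "events.json"
    # and its fields are hypothetical, used only for illustration).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Start a local session; the same API runs unchanged on a cluster.
    spark = SparkSession.builder.appName("spark-druid-comparison").getOrCreate()

    # Read newline-delimited JSON click events into a DataFrame.
    events = spark.read.json("events.json")

    # Batch aggregation with the DataFrame API: top 10 most-clicked products.
    top_products = (
        events.filter(F.col("event_type") == "click")
        .groupBy("product_id")
        .count()
        .orderBy(F.col("count").desc())
        .limit(10)
    )
    top_products.show()

    spark.stop()

The same aggregation could also be written as a SQL string and run through spark.sql(), or kept running continuously with Structured Streaming.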

Apache Druid Reviews

Rockset, ClickHouse, Apache Druid, or Apache Pinot? Which is the best database for customer-facing analytics?
“When you're dealing with highly concurrent environments, you really need an architecture that’s designed for that CPU efficiency to get the most performance out of the smallest hardware footprint—which is another reason why folks like to use Apache Druid,” says David Wang, VP of Product and Corporate Marketing at Imply. (Imply offers Druid as a service.)
Source: embeddable.com
Apache Druid vs. Time-Series Databases
Druid is a real-time analytics database that not only incorporates architecture designs from TSDBs such as time-based partitioning and fast aggregation, but also includes ideas from search systems and data warehouses, making it a great fit for all types of event-driven data. Druid is fundamentally an OLAP engine at heart, albeit one designed for more modern, event-driven...
Source: imply.io
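Druid's time-partitioned, aggregation-oriented design shows up directly in how it is queried. Below is a minimal sketch of calling Druid's SQL HTTP endpoint from Python; it assumes a Druid router on localhost:8888 and a hypothetical "clickstream" datasource with __time and product_id columns (names are illustrative, not taken from the review above):

    # Minimal Druid SQL query over HTTP (assumes a router at localhost:8888
    # and a hypothetical "clickstream" datasource; adjust to your deployment).
    import requests

    sql = """
    SELECT product_id, COUNT(*) AS clicks
    FROM clickstream
    WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '30' DAY
    GROUP BY product_id
    ORDER BY clicks DESC
    LIMIT 10
    """

    # Druid exposes SQL at /druid/v2/sql/; results come back as JSON rows.
    response = requests.post(
        "http://localhost:8888/druid/v2/sql/",
        json={"query": sql},
        timeout=30,
    )
    response.raise_for_status()
    for row in response.json():
        print(row["product_id"], row["clicks"])

Time filters like the one on __time map onto Druid's time-partitioned segments, which is a large part of why this kind of aggregation stays fast at scale.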

Social recommendations and mentions

Based on our records, Apache Spark should be more popular than Apache Druid. It has been mentioned 56 times since March 2021. We track product recommendations and mentions across public social media platforms and blogs; these can help you gauge which product is more popular and what people think of it.

Apache Spark mentions (56)

  • Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
    Recently I had to revisit the "JVM languages universe" again. Yes, language(s), plural! Java isn't the only language that uses the JVM. I previously used Scala, which is a JVM language, to use Apache Spark for Data Engineering workloads, but this is for another post 😉. - Source: dev.to / about 2 months ago
  • 🦿🛴Smarcity garbage reporting automation w/ ollama
    Consume data into third party software (then let Open Search or Apache Spark or Apache Pinot) for analysis/datascience, GIS systems (so you can put reports on a map) or any ticket management system. - Source: dev.to / 3 months ago
  • Go concurrency simplified. Part 4: Post office as a data pipeline
    Also, this knowledge applies to learning more about data engineering, as this field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc. - Source: dev.to / 4 months ago
  • Five Apache projects you probably didn't know about
    Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: the Zeta engine from SeaTunnel or a wrapper around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features. - Source: dev.to / 4 months ago
  • Spark – A micro framework for creating web applications in Kotlin and Java
    A JVM based framework named "Spark", when https://spark.apache.org exists? - Source: Hacker News / 10 months ago

Apache Druid mentions (9)

  • How to choose the right type of database
    Apache Druid: Focused on real-time analytics and interactive queries on large datasets. Druid is well-suited for high-performance applications in user-facing analytics, network monitoring, and business intelligence. - Source: dev.to / about 2 months ago
  • Choosing Between a Streaming Database and a Stream Processing Framework in Python
    Online analytical processing (OLAP) databases like Apache Druid, Apache Pinot, and ClickHouse shine in addressing user-initiated analytical queries. You might write a query to analyze historical data to find the most-clicked products over the past month efficiently using OLAP databases. When contrasting with streaming databases, they may not be optimized for incremental computation, leading to challenges in... - Source: dev.to / 2 months ago
  • Analysing Github Stars - Extracting and analyzing data from Github using Apache NiFi®, Apache Kafka® and Apache Druid®
    Spencer Kimball (now CEO at CockroachDB) wrote an interesting article on this topic in 2021 where they created spencerkimball/stargazers based on a Python script. So I started thinking: could I create a data pipeline using Nifi and Kafka (two OSS tools often used with Druid) to get the API data into Druid - and then use SQL to do the analytics? The answer was yes! And I have documented the outcome below. Here’s... - Source: dev.to / over 1 year ago
  • Apache Druid® - an enterprise architect's overview
    Apache Druid is part of the modern data architecture. It uses a special data format designed for analytical workloads, using extreme parallelisation to get data in and get data out. A shared-nothing, microservices architecture helps you to build highly-available, extreme scale analytics features into your applications. - Source: dev.to / over 1 year ago
  • Druids by Datadog
    Datadog's product is a bit too close to Apache Druid to have named their design system so similarly. From https://druid.apache.org/ : > Druid unlocks new types of queries and workflows for clickstream, APM, supply chain, network telemetry, digital marketing, risk/fraud, and many other types of data. Druid is purpose built for rapid, ad-hoc queries on both real-time and historical data. - Source: Hacker News / over 1 year ago

What are some alternatives?

When comparing Apache Spark and Apache Druid, you can also consider the following products:

Apache Flink - Flink is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations.

Apache Hive - Apache Hive data warehouse software facilitates querying and managing large datasets residing in distributed storage.

Apache Airflow - Airflow is a platform to programmatically author, schedule, and monitor data pipelines.

Hadoop - Open-source software for reliable, scalable, distributed computing

Apache Kylin - OLAP Engine for Big Data

Apache Kafka - Apache Kafka is an open-source message broker project developed by the Apache Software Foundation and written in Scala.