Software Alternatives & Reviews

Apache Flink VS Apache Druid

Compare Apache Flink VS Apache Druid and see what their differences are

Apache Flink

Flink is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations.
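As a rough illustration of that model, a minimal word-count job using Flink's DataStream API in Java might look like the sketch below; the socket source on localhost:9999 is only an assumption for a quick local test (fed, for example, with nc -lk 9999).

    // Minimal Flink streaming word count (DataStream API, Java).
    // The socket source on localhost:9999 is an assumption for local testing.
    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.util.Collector;

    public class StreamingWordCount {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Read raw text lines from a socket; in production this would typically be Kafka, Kinesis, etc.
            DataStream<String> lines = env.socketTextStream("localhost", 9999);

            lines
                // Split each line into (word, 1) pairs.
                .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                        for (String word : line.toLowerCase().split("\\W+")) {
                            if (!word.isEmpty()) {
                                out.collect(Tuple2.of(word, 1));
                            }
                        }
                    }
                })
                // Partition by word and keep a running count; Flink manages the per-key state
                // and restores it after failures, which is the fault tolerance the description refers to.
                .keyBy(pair -> pair.f0)
                .sum(1)
                .print();

            env.execute("Streaming word count");
        }
    }

The same job can run inside the IDE against Flink's embedded mini cluster or be packaged and submitted to a real cluster.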

Apache Druid

Fast column-oriented distributed data store

Apache Flink videos

GOTO 2019 • Introduction to Stateful Stream Processing with Apache Flink • Robert Metzger

More videos:

  • Tutorial - Apache Flink Tutorial | Flink vs Spark | Real Time Analytics Using Flink | Apache Flink Training
  • Tutorial - How to build a modern stream processor: The science behind Apache Flink - Stefan Richter

Apache Druid videos

An introduction to Apache Druid

More videos:

  • Review - Building a Real-Time Analytics Stack with Apache Kafka and Apache Druid

Category Popularity

0-100% (relative to Apache Flink and Apache Druid)
  • Big Data: Apache Flink 74%, Apache Druid 26%
  • Databases: Apache Flink 50%, Apache Druid 50%
  • Stream Processing: Apache Flink 100%, Apache Druid 0%
  • Relational Databases: Apache Flink 0%, Apache Druid 100%

User comments

Share your experience with using Apache Flink and Apache Druid. For example, how are they different and which one is better?

Reviews

These are some of the external sources and on-site user reviews we've used to compare Apache Flink and Apache Druid.

Apache Flink Reviews

We have no reviews of Apache Flink yet.

Apache Druid Reviews

Rockset, ClickHouse, Apache Druid, or Apache Pinot? Which is the best database for customer-facing analytics?
“When you're dealing with highly concurrent environments, you really need an architecture that’s designed for that CPU efficiency to get the most performance out of the smallest hardware footprint—which is another reason why folks like to use Apache Druid,” says David Wang, VP of Product and Corporate Marketing at Imply. (Imply offers Druid as a service.)
Source: embeddable.com
Apache Druid vs. Time-Series Databases
Druid is a real-time analytics database that not only incorporates architecture designs from TSDBs such as time-based partitioning and fast aggregation, but also includes ideas from search systems and data warehouses, making it a great fit for all types of event-driven data. Druid is fundamentally an OLAP engine at heart, albeit one designed for more modern, event-driven...
Source: imply.io
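
To make the "time-based partitioning and fast aggregation" point above concrete, here is a minimal sketch of issuing a time-bucketed query against Druid's SQL HTTP endpoint from Java. The router URL (localhost:8888) and the "events" datasource are assumptions for illustration, not part of the review.

    // Send a Druid SQL query over HTTP and print the JSON result rows.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DruidHourlyCounts {
        public static void main(String[] args) throws Exception {
            // Hypothetical "events" datasource; TIME_FLOOR buckets rows by hour,
            // which lines up with Druid's time-partitioned storage.
            String sql = "SELECT TIME_FLOOR(__time, 'PT1H') AS hour, COUNT(*) AS events "
                       + "FROM events "
                       + "WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY "
                       + "GROUP BY 1 ORDER BY 1";

            String body = "{\"query\": \"" + sql.replace("\"", "\\\"") + "\"}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8888/druid/v2/sql"))  // router/broker address is an assumption
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            System.out.println(response.body());  // JSON array of {hour, events} rows
        }
    }

The same query could also be run from the Druid web console or through any JDBC client using Druid's Avatica driver.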

Social recommendations and mentions

Based on our record, Apache Flink seems to be more popular than Apache Druid. It has been mentioned 27 times since March 2021. We track product recommendations and mentions on various public social media platforms and blogs; they can help you identify which product is more popular and what people think of it.

Apache Flink mentions (27)

  • Top 10 Common Data Engineers and Scientists Pain Points in 2024
    Data scientists often prefer Python for its simplicity and powerful libraries like Pandas or SciPy. However, many real-time data processing tools are Java-based. Take the example of Kafka, Flink, or Spark streaming. While these tools have their Python API/wrapper libraries, they introduce increased latency, and data scientists need to manage dependencies for both Python and JVM environments. For example,... - Source: dev.to / 17 days ago
  • Choosing Between a Streaming Database and a Stream Processing Framework in Python
    Other stream processing engines (such as Flink and Spark Streaming) provide SQL interfaces too, but the key difference is a streaming database has its storage. Stream processing engines require a dedicated database to store input and output data. On the other hand, streaming databases utilize cloud-native storage to maintain materialized views and states, allowing data replication and independent storage scaling. - Source: dev.to / 3 months ago
  • Go concurrency simplified. Part 4: Post office as a data pipeline
    Also, this knowledge applies to learning more about data engineering, as this field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc. - Source: dev.to / 4 months ago
  • Five Apache projects you probably didn't know about
    Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: the Zeta engine from SeaTunnel or a wrapper around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features. - Source: dev.to / 4 months ago
  • Getting Started with Flink SQL, Apache Iceberg and DynamoDB Catalog
    Due to the technology transformation we want to do recently, we started to investigate Apache Iceberg. In addition, the data processing engine we use in house is Apache Flink, so it's only fair to look for an experimental environment that integrates Flink and Iceberg. - Source: dev.to / 4 months ago

Apache Druid mentions (9)

  • How to choose the right type of database
    Apache Druid: Focused on real-time analytics and interactive queries on large datasets. Druid is well-suited for high-performance applications in user-facing analytics, network monitoring, and business intelligence. - Source: dev.to / about 2 months ago
  • Choosing Between a Streaming Database and a Stream Processing Framework in Python
    Online analytical processing (OLAP) databases like Apache Druid, Apache Pinot, and ClickHouse shine in addressing user-initiated analytical queries. You might write a query to analyze historical data to find the most-clicked products over the past month efficiently using OLAP databases. When contrasting with streaming databases, they may not be optimized for incremental computation, leading to challenges in... - Source: dev.to / 3 months ago
  • Analysing Github Stars - Extracting and analyzing data from Github using Apache NiFi®, Apache Kafka® and Apache Druid®
    Spencer Kimball (now CEO at CockroachDB) wrote an interesting article on this topic in 2021 where they created spencerkimball/stargazers based on a Python script. So I started thinking: could I create a data pipeline using Nifi and Kafka (two OSS tools often used with Druid) to get the API data into Druid - and then use SQL to do the analytics? The answer was yes! And I have documented the outcome below. Here’s... - Source: dev.to / over 1 year ago
  • Apache Druid® - an enterprise architect's overview
    Apache Druid is part of the modern data architecture. It uses a special data format designed for analytical workloads, using extreme parallelisation to get data in and get data out. A shared-nothing, microservices architecture helps you to build highly-available, extreme scale analytics features into your applications. - Source: dev.to / over 1 year ago
  • Druids by Datadog
    Datadog's product is a bit too close to Apache Druid to have named their design system so similarly. From https://druid.apache.org/ : > Druid unlocks new types of queries and workflows for clickstream, APM, supply chain, network telemetry, digital marketing, risk/fraud, and many other types of data. Druid is purpose built for rapid, ad-hoc queries on both real-time and historical data. - Source: Hacker News / over 1 year ago

What are some alternatives?

When comparing Apache Flink and Apache Druid, you can also consider the following products:

Apache Spark - Apache Spark is an engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing.

Amazon Kinesis - Amazon Kinesis services make it easy to work with real-time streaming data in the AWS cloud.

Apache Hive - Apache Hive data warehouse software facilitates querying and managing large datasets residing in distributed storage.

Spring Framework - The Spring Framework provides a comprehensive programming and configuration model for modern Java-based enterprise applications - on any kind of deployment platform.

Apache Kylin - OLAP Engine for Big Data

Spark Mail - Spark helps you take your inbox under control. Instantly see what’s important and quickly clean up the rest. Spark for Teams allows you to create, discuss, and share email with your colleagues.