Software Alternatives & Reviews

Apache Beam VS Spark Streaming

Compare Apache Beam VS Spark Streaming and see what their differences are

Apache Beam

Apache Beam provides an advanced unified programming model to implement batch and streaming data processing jobs.
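
To make the unified-model claim concrete, here is a minimal word-count sketch using the Beam Python SDK (the file names and pipeline options are placeholders, not taken from either product page). The same transforms run as a batch or streaming job depending only on whether the source is bounded, e.g. swapping the text file for a Kafka or Pub/Sub source plus a windowing step.

```python
# Minimal sketch: a Beam word-count pipeline. Input/output paths are placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    options = PipelineOptions()  # runner, project, etc. are supplied via CLI flags
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("input.txt")        # bounded source -> batch
            | "Split" >> beam.FlatMap(lambda line: line.split())  # one word per element
            | "PairWithOne" >> beam.Map(lambda word: (word, 1))
            | "Count" >> beam.CombinePerKey(sum)
            | "Format" >> beam.MapTuple(lambda word, n: f"{word}: {n}")
            | "Write" >> beam.io.WriteToText("counts")
        )


if __name__ == "__main__":
    run()
```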

Spark Streaming

Spark Streaming makes it easy to build scalable and fault-tolerant streaming applications.
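
For comparison, a minimal sketch of the same word count with the classic Spark Streaming (DStream) API in PySpark; the host, port, and 5-second batch interval are placeholder values. Structured Streaming, discussed in the videos further down, is the newer API for the same kind of job.

```python
# Minimal sketch: Spark Streaming (DStream) word count over a socket source.
# Host/port and batch interval are placeholders, e.g. feed it with `nc -lk 9999`.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="NetworkWordCount")
ssc = StreamingContext(sc, 5)  # micro-batches every 5 seconds

lines = ssc.socketTextStream("localhost", 9999)
counts = (
    lines.flatMap(lambda line: line.split())
         .map(lambda word: (word, 1))
         .reduceByKey(lambda a, b: a + b)
)
counts.pprint()  # print each micro-batch's counts to the driver log

ssc.start()
ssc.awaitTermination()
```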

Apache Beam videos

How to Write Batch or Streaming Data Pipelines with Apache Beam in 15 mins with James Malone

More videos:

  • Review - Best practices towards a production-ready pipeline with Apache Beam
  • Review - Streaming data into Apache Beam with Kafka

Spark Streaming videos

Spark Streaming Vs Kafka Streams || Which is The Best for Stream Processing?

More videos:

  • Tutorial - Spark Streaming Vs Structured Streaming Comparison | Big Data Hadoop Tutorial

Category Popularity

0-100% (relative to Apache Beam and Spark Streaming)

  Category            Apache Beam   Spark Streaming
  Big Data            56%           44%
  Stream Processing   0%            100%
  Data Dashboard      100%          0%
  Data Management     0%            100%

User comments

Share your experience with using Apache Beam and Spark Streaming. For example, how are they different and which one is better?

Social recommendations and mentions

Based on our record, Apache Beam should be more popular than Spark Streaming. It has been mentioned 14 times since March 2021. We track product recommendations and mentions across various public social media platforms and blogs; these can help you identify which product is more popular and what people think of it.

Apache Beam mentions (14)

  • Ask HN: Does (or why does) anyone use MapReduce anymore?
    The "streaming systems" book answers your question and more: https://www.oreilly.com/library/view/streaming-systems/9781491983867/. It gives you a history of how batch processing started with MapReduce, and how attempts at scaling by moving towards streaming systems gave us all the subsequent frameworks (Spark, Beam, etc.). As for the framework called MapReduce, it isn't used much, but its descendant... - Source: Hacker News / 3 months ago
  • How do Streaming Aggregation Pipelines work?
    Apache Beam is one of many tools that you can use. Source: 5 months ago
  • Real Time Data Infra Stack
    Apache Beam: Streaming framework which can be run on several runner such as Apache Flink and GCP Dataflow. - Source: dev.to / over 1 year ago
  • Google Cloud Reference
    Apache Beam: Batch/streaming data processing 🔗Link. - Source: dev.to / over 1 year ago
  • Composer out of resources - "INFO Task exited with return code Negsignal.SIGKILL"
    What you are looking for is Dataflow. It can be a bit tricky to wrap your head around at first, but I highly suggest leaning into this technology for most of your data engineering needs. It's based on the open source Apache Beam framework that originated at Google. We use an internal version of this system at Google for virtually all of our pipeline tasks, from a few GB, to Exabyte scale systems -- it can do it all. Source: over 1 year ago

Spark Streaming mentions (3)

  • Choosing Between a Streaming Database and a Stream Processing Framework in Python
    Other stream processing engines (such as Flink and Spark Streaming) provide SQL interfaces too, but the key difference is that a streaming database has its own storage. Stream processing engines require a dedicated database to store input and output data. On the other hand, streaming databases utilize cloud-native storage to maintain materialized views and states, allowing data replication and independent storage scaling. - Source: dev.to / 3 months ago
  • Machine Learning Pipelines with Spark: Introductory Guide (Part 1)
    Spark Streaming: The component for real-time data processing and analytics. - Source: dev.to / over 1 year ago
  • Spark for beginners - and you
    [Spark] is a big data framework and currently one of the most popular tools for big data analytics. It contains libraries for data analysis, machine learning, graph analysis and streaming live data. In general Spark is faster than Hadoop, as it does not write intermediate results to disk. It is not a data storage system. We can use Spark on top of HDFS or read data from other sources like Amazon S3. It is the designed... - Source: dev.to / over 2 years ago

What are some alternatives?

When comparing Apache Beam and Spark Streaming, you can also consider the following products

Google Cloud Dataflow - Google Cloud Dataflow is a fully-managed cloud service and programming model for batch and streaming big data processing.

Confluent - Confluent offers a real-time data platform built around Apache Kafka.

Apache Airflow - Airflow is a platform to programmatically author, schedule and monitor data pipelines.

Amazon Kinesis - Amazon Kinesis services make it easy to work with real-time streaming data in the AWS cloud.

Amazon EMR - Amazon Elastic MapReduce is a web service that makes it easy to quickly process vast amounts of data.

Google BigQuery - A fully managed data warehouse for large-scale data analytics.