Software Alternatives & Reviews

Apache Beam VS Benthos

Compare Apache Beam VS Benthos and see what their differences are

Apache Beam

Apache Beam provides an advanced unified programming model to implement batch and streaming data processing jobs.
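
For a rough sense of what that unified model looks like in practice, the sketch below is a minimal word-count pipeline written with Beam's Python SDK; the input and output paths are placeholders, and the runner (DirectRunner, Flink, Dataflow, etc.) is selected through pipeline options rather than code changes.

    # Minimal Apache Beam word-count sketch (Python SDK).
    # The same pipeline definition can run as a batch or streaming job;
    # the runner is picked via PipelineOptions (DirectRunner, Flink, Dataflow, ...).
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def run():
        options = PipelineOptions()  # runner/streaming flags come from command-line args
        with beam.Pipeline(options=options) as p:
            (
                p
                | "Read" >> beam.io.ReadFromText("input.txt")        # placeholder input path
                | "Split" >> beam.FlatMap(lambda line: line.split())
                | "PairWithOne" >> beam.Map(lambda word: (word, 1))
                | "CountPerWord" >> beam.CombinePerKey(sum)
                | "Format" >> beam.MapTuple(lambda word, n: f"{word}: {n}")
                | "Write" >> beam.io.WriteToText("counts")           # placeholder output prefix
            )

    if __name__ == "__main__":
        run()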

Benthos

Stream data processor written in Go with YAML pipeline configuration.
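
As a rough sketch of that YAML-driven approach, a minimal Benthos config could look like the following; the stdin/stdout connectors and field names are illustrative choices only, and a file like this would typically be run with benthos -c config.yaml.

    # Minimal Benthos pipeline sketch (illustrative values only):
    # read raw lines from stdin, wrap each one in a small JSON document
    # using a Bloblang mapping, and write the result to stdout.
    input:
      stdin: {}

    pipeline:
      processors:
        - mapping: |
            root.message = content().string()
            root.processed_at = now()

    output:
      stdout: {}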

Apache Beam

Categories
  • Big Data
  • Data Dashboard
  • Data Warehousing
  • Big Data Tools
Website beam.apache.org

Benthos

Categories
  • Workflow Automation
  • ETL
  • Data Dashboard
  • Analytics
Website benthos.dev

Apache Beam videos

How to Write Batch or Streaming Data Pipelines with Apache Beam in 15 mins with James Malone

More videos:

  • Review - Best practices towards a production-ready pipeline with Apache Beam
  • Review - Streaming data into Apache Beam with Kafka

Benthos videos

Aquastar Benthos/Seiko 5717/Lemania 5100: A Historical Review of Centrally Mounted Chronographs

More videos:

  • Review - Benthos: Intertidal Zone
  • Review - Benthos: Crabs, Coral, and More

Category Popularity

0-100% (relative to Apache Beam and Benthos)

  • Big Data: Apache Beam 100%, Benthos 0%
  • ETL: Apache Beam 0%, Benthos 100%
  • Data Dashboard: Apache Beam 85%, Benthos 15%
  • Workflow Automation: Apache Beam 0%, Benthos 100%

User comments

Share your experience with using Apache Beam and Benthos. For example, how are they different and which one is better?

Social recommendations and mentions

Based on our records, Benthos appears to be more popular than Apache Beam: it has been mentioned 22 times since March 2021, versus 14 mentions for Apache Beam. We track product recommendations and mentions on various public social media platforms and blogs; they can help you identify which product is more popular and what people think of it.

Apache Beam mentions (14)

  • Ask HN: Does (or why does) anyone use MapReduce anymore?
    The "streaming systems" book answers your question and more: https://www.oreilly.com/library/view/streaming-systems/9781491983867/. It gives you a history of how batch processing started with MapReduce, and how attempts at scaling by moving towards streaming systems gave us all the subsequent frameworks (Spark, Beam, etc.). As for the framework called MapReduce, it isn't used much, but its descendant... - Source: Hacker News / 2 months ago
  • How do Streaming Aggregation Pipelines work?
    Apache Beam is one of many tools that you can use. Source: 4 months ago
  • Real Time Data Infra Stack
    Apache Beam: Streaming framework which can be run on several runners such as Apache Flink and GCP Dataflow. - Source: dev.to / over 1 year ago
  • Google Cloud Reference
    Apache Beam: Batch/streaming data processing 🔗Link. - Source: dev.to / over 1 year ago
  • Composer out of resources - "INFO Task exited with return code Negsignal.SIGKILL"
    What you are looking for is Dataflow. It can be a bit tricky to wrap your head around at first, but I highly suggest leaning into this technology for most of your data engineering needs. It's based on the open source Apache Beam framework that originated at Google. We use an internal version of this system at Google for virtually all of our pipeline tasks, from a few GB, to Exabyte scale systems -- it can do it all. Source: over 1 year ago

Benthos mentions (22)

  • Ask HN: Anyone looking for contributors for their open source projects
    If you're interested in Golang and data streaming, https://benthos.dev is a good project to contribute to. There are quite a few issues open on the GitHub project which anyone can pick up. Writing new connectors and adding tests / docs is always a good place to start. The maintainer is super-friendly and he's always active on the https://benthos.dev/community channels. I'm also there most of the time, since I've... - Source: Hacker News / 8 days ago
  • Seeking Insights on Stream Processing Frameworks: Experiences, Features, and Onboarding
    I have been working in the stream processing space since 2020 and I used Benthos. Since Benthos is a stateless stream processor, I have other components around it which deal with various types of application state, such as Kafka, NATS, Redis, various flavours of SQL databases, MongoDB etc. Source: 11 months ago
  • Realistic Stack for One Person to implement/ maintain in an SMB?
    You might want to add Benthos to your stack. It’s Open Source and it works great for data streaming tasks. You could have your task orchestrator (Airflow, Flyte etc) run it on demand. I demoed it at KnativeCon last year. Source: about 1 year ago
  • What made you fall in love with Go?
    A few years ago, I found Benthos (the Open Source data streaming processor) and it was really easy to dive into it and add new features. Going through the various 3rd party libraries that it includes is usually straightforward and I'm comfortable enough with the language and various design patterns now to quickly get what's going on. That was rarely the case with C++. Source: about 1 year ago
  • Minimal OAuth provider in Benthos and Bloblang
    This is a miniature OAuth provider implemented in Benthos and Bloblang. It is designed to serve a single OAuth client app and will generate JWT access tokens with limited lifetime. Source: about 1 year ago

What are some alternatives?

When comparing Apache Beam and Benthos, you can also consider the following products

Google Cloud Dataflow - Google Cloud Dataflow is a fully-managed cloud service and programming model for batch and streaming big data processing.

Apache NiFi - An easy-to-use, powerful, and reliable system to process and distribute data.

Apache Airflow - Airflow is a platform to programmatically author, schedule and monitor data pipelines.

Amazon EMR - Amazon Elastic MapReduce is a web service that makes it easy to quickly process vast amounts of data.

Apache Flink - Flink is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations.

Google BigQuery - A fully managed data warehouse for large-scale data analytics.