Software Alternatives, Accelerators & Startups

Centrifugo VS Apache Beam

Compare Centrifugo VS Apache Beam and see how they differ

Centrifugo logo Centrifugo

Centrifugo can instantly deliver messages to online application users connected over its supported transports (WebSocket, HTTP-streaming, SSE/EventSource, gRPC, SockJS, WebTransport).
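As a sketch of how a backend hands Centrifugo a message to fan out to connected clients: the snippet below builds the HTTP request for Centrifugo's server publish API. The localhost URL, API key value, and channel name are hypothetical; the `/api/publish` endpoint and `X-API-Key` header follow recent Centrifugo versions, so adjust to match your deployment.

```python
# Sketch: asking a Centrifugo server to publish data into a channel.
# Assumes a Centrifugo instance at http://localhost:8000 with an API key
# configured; connected subscribers of the channel receive the payload.
import json
import urllib.request


def build_publish_request(base_url: str, api_key: str,
                          channel: str, data: dict) -> urllib.request.Request:
    """Construct the POST request for Centrifugo's publish API call."""
    body = json.dumps({"channel": channel, "data": data}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/publish",
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-API-Key": api_key,  # server API authentication
        },
        method="POST",
    )


req = build_publish_request("http://localhost:8000", "my-api-key",
                            "news", {"text": "hello"})
# urllib.request.urlopen(req)  # uncomment with a running Centrifugo server
```

Real-time fan-out then happens server-side: every client subscribed to the "news" channel over any of the transports above gets the payload.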

Apache Beam logo Apache Beam

Apache Beam provides an advanced unified programming model to implement batch and streaming data processing jobs.
  • Centrifugo Landing page (2023-05-06)
  • Apache Beam Landing page (2022-03-31)

Centrifugo videos

No Centrifugo videos yet.

Apache Beam videos

How to Write Batch or Streaming Data Pipelines with Apache Beam in 15 mins with James Malone

More videos:

  • Review - Best practices towards a production-ready pipeline with Apache Beam
  • Review - Streaming data into Apache Beam with Kafka

Category Popularity

0-100% (relative to Centrifugo and Apache Beam)

  • Testing: Centrifugo 100% / Apache Beam 0%
  • Big Data: Centrifugo 0% / Apache Beam 100%
  • Hard Drive Tools: Centrifugo 100% / Apache Beam 0%
  • Data Dashboard: Centrifugo 0% / Apache Beam 100%

User comments

Share your experience with using Centrifugo and Apache Beam. For example, how are they different and which one is better?

Social recommendations and mentions

Based on our record, Apache Beam should be more popular than Centrifugo. It has been mentioned 14 times since March 2021. We are tracking product recommendations and mentions on various public social media platforms and blogs. They can help you identify which product is more popular and what people think of it.

Centrifugo mentions (5)

  • Real Time Chat Via Python
    Take a look at Centrifugo - https://centrifugal.dev/ - it provides a way to build efficient real-time messaging system using standard Django without ASGI involved. Source: 11 months ago
  • Communicating between a lot of clients with websockets.
    Hello, I am an author of Centrifugo project (https://centrifugal.dev/). It's a WebSocket server which scales using Redis. Instead of the approach you described when every message is delivered to every server Centrifugo uses PUB/SUB in a way that every server subscribed only to channels which current server connections have. It should scale pretty well, and resubscribe to channels is super-efficient. All the load... Source: over 1 year ago
  • Why is redis used with websockets?
    Hello, I am author of Centrifugo (https://centrifugal.dev/) project - WebSocket server which scales with Redis. We have several blog posts which may help to answer your questions and give you some real world numbers about using Redis for WebSocket apps. Some links:. Source: over 1 year ago
  • Websocket server design
    Https://centrifugal.dev/ It's go native you can even write your own using it's underlying centrifuge library. We use it currently in Production just the docker container to be honest is what we deploy and just use a small config file or flags. Source: over 1 year ago
  • GitHub - centrifugal/centrifuge-js: JavaScript client SDK to communicate with Centrifugo real-time messaging server from browser, NodeJS and React Native. Supports WebSocket, HTTP-streaming, EventSource, WebTransport and SockJS transports
    Hey folks! Centrifugo is an open-source scalable real-time messaging server written in Go language. It's language-agnostic and can be used to build chat apps, live comments, multiplayer games, real-time data visualizations, collaborative tools, etc. In combination with any backend. Including NodeJS-based backend which is relevant to this subreddit. And while Javascript/Node ecosystem has good WebSocket tools, I... Source: almost 2 years ago

Apache Beam mentions (14)

  • Ask HN: Does (or why does) anyone use MapReduce anymore?
    The "streaming systems" book answers your question and more: https://www.oreilly.com/library/view/streaming-systems/9781491983867/. It gives you a history of how batch processing started with MapReduce, and how attempts at scaling by moving towards streaming systems gave us all the subsequent frameworks (Spark, Beam, etc.). As for the framework called MapReduce, it isn't used much, but its descendant... - Source: Hacker News / 4 months ago
  • How do Streaming Aggregation Pipelines work?
    Apache Beam is one of many tools that you can use. Source: 6 months ago
  • Real Time Data Infra Stack
    Apache Beam: Streaming framework which can be run on several runner such as Apache Flink and GCP Dataflow. - Source: dev.to / over 1 year ago
  • Google Cloud Reference
    Apache Beam: Batch/streaming data processing 🔗Link. - Source: dev.to / over 1 year ago
  • Composer out of resources - "INFO Task exited with return code Negsignal.SIGKILL"
    What you are looking for is Dataflow. It can be a bit tricky to wrap your head around at first, but I highly suggest leaning into this technology for most of your data engineering needs. It's based on the open source Apache Beam framework that originated at Google. We use an internal version of this system at Google for virtually all of our pipeline tasks, from a few GB, to Exabyte scale systems -- it can do it all. Source: almost 2 years ago

What are some alternatives?

When comparing Centrifugo and Apache Beam, you can also consider the following products

Crossplane - The open source multicloud control plane. Contribute to crossplane/crossplane development by creating an account on GitHub.

Google Cloud Dataflow - Google Cloud Dataflow is a fully-managed cloud service and programming model for batch and streaming big data processing.

Pushbullet - Pushbullet - Your devices working better together

Apache Airflow - Airflow is a platform to programmatically author, schedule and monitor data pipelines.

Notify - Contact Notify to schedule a demo or request a trial or pricing information and see how Notify's solutions can help your organization.

Amazon EMR - Amazon Elastic MapReduce is a web service that makes it easy to quickly process vast amounts of data.