Software Alternatives & Reviews

Apache Kafka

Apache Kafka is an open-source message broker and event streaming project developed by the Apache Software Foundation, written in Scala and Java.
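For context, the core Kafka interaction is publishing records to a named topic and reading them back from it. Below is a minimal producer sketch using the official Java kafka-clients API; the broker address ("localhost:9092") and the "page-views" topic are assumptions made up for illustration, not details from this page.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class PageViewProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one record to the "page-views" topic; the key determines the partition.
            producer.send(new ProducerRecord<>("page-views", "user-42", "{\"page\":\"/pricing\"}"));
            producer.flush();
        }
    }
}
```

Any consumer subscribed to the same topic then receives the record; a consumer-group sketch appears further down the page.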

Stream Processing · Data Integration · ETL · Web Service Automation · Monitoring Tools

Apache Kafka Reviews and details

Apache Kafka Landing Page

Videos

Apache Kafka Tutorial | What is Apache Kafka? | Kafka Tutorial for Beginners | Edureka

Apache Kafka - Getting Started - Kafka Multi-node Cluster - Review Properties

4. Apache Kafka Fundamentals | Confluent Fundamentals for Apache Kafka®

Social recommendations and mentions

We have tracked the following product recommendations or mentions on various public social media platforms and blogs. They can help you see what people think about Apache Kafka and what they use it for.
  • JR, quality Random Data from the Command line, part II
    In the first part of this series, we saw how to use JR in simple use cases to stream random data from predefined templates to standard output and to Apache Kafka on Confluent Cloud. - Source: dev.to / 5 days ago
  • Exploring Async PHP
    The use of queues such as Amazon SQS, RabbitMQ or Apache Kafka has been a widely accepted solution for some time. - Source: dev.to / 6 days ago
  • Best way to schedule events and handle them in the future?
    The second approach is to use a message queue, as some others have suggested. The most powerful of these is probably Kafka, but it's almost certainly overkill. (Technically, Kafka is an event log, not a message queue, but that's semantics at this point). Source: 11 days ago
  • Top 6 message queues for distributed architectures
    Apache Kafka is an open-source, distributed event streaming platform with message communication and storage capabilities. Although Kafka is not technically a message queue, it has the functionality of a message queue using topic partitions. - Source: dev.to / 18 days ago (see the consumer-group sketch after this list)
  • Amazon Ditches Microservices for Monolith: Decoding Prime Video's Architectural Shift
    To understand the limitations of AWS Step Functions, let us look at what it was doing: Step Functions handled communication between the different steps of their stream quality architecture, as well as error handling. For communication between services, tools like Kafka exist and can be used to transfer data (or state) between services. Kafka uses a pub/sub (publish and subscribe) messaging model that... - Source: dev.to / 19 days ago
  • Kafka vs. Redpanda Performance – Do the claims add up?
    "The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type." https://kafka.apache.org/. - Source: Hacker News / 21 days ago
  • HRV-Mart
    In order to create a scalable back-end I use a micro-service architecture. The current version of the HRV-Mart back-end consists of Product-Microservice, User-Microservice, Auth-Microservice, Order-Microservice, Cart-Microservice, Like-Microservice and API-Gateway. The above micro-services are loosely coupled, and communication between them happens via Apache Kafka. In order to make them more secure, I added unit tests. The master... - Source: dev.to / 29 days ago
  • JR, quality Random Data from the Command line, part I
    So, is JR yet another faking library written in Go? Yes and no. JR indeed implements most of the APIs in fakerjs and Go fake it, but it's also able to stream data directly to stdout, Kafka, Redis and more (Elastic and MongoDB coming). JR can talk directly to Confluent Schema Registry, manage json-schema and Avro schemas, easily maintain coherence and referential integrity. If you need more than what is OOTB in JR,... - Source: dev.to / about 1 month ago
  • Multi-Stream Joins With SQL
    If you're looking to perform stream-to-stream joins in SQL, a streaming database helps you get the most out of your data. By using a streaming database, you can run SQL queries continuously on single streams and join two or more streams. Much like other popular RDBMSs (relational database management systems), a streaming database can join any two dataset/table expressions using... - Source: dev.to / about 1 month ago (a stream-join sketch follows this list)
  • Querying microservices in real-time with materialized views
    RisingWave is an open-source streaming database that has built-in, fully managed CDC source connectors for various databases; it can also collect data from other sources such as Kafka, Pulsar, Kinesis, or Redpanda, and it allows you to query real-time streams using SQL. You can get a materialized view that is always up-to-date. - Source: dev.to / about 1 month ago
  • Shared entities in a microservices architecture
    https://kafka.apache.org/ is another messaging service that has more tutorials and the like. Source: about 1 month ago
  • Modern stack to build a real-time event-driven app
    The first component is a database that acts as a data source, which can be PostgreSQL (other popular options include MongoDB and MySQL). As data changes in the database, the change is detected using the log-based CDC (Change Data Capture) capabilities of the database, which capture the change and record it in a transaction log. The captured changes are then transformed into a change event that can be consumed in... - Source: dev.to / about 2 months ago
  • How Change Data Capture (CDC) Works with Streaming Database
    A streaming database is a type of database that is designed to handle continuous data streams in real-time and makes it possible to query this data. You can read more about how a streaming database differs from a traditional database and how to choose the right streaming database in my other blog posts. CDC is particularly useful when working with streaming databases; you can ingest CDC data directly from... - Source: dev.to / 2 months ago
  • How Streaming database differs from a Traditional database?
    For example, RisingWave is one of the fastest-growing open-source streaming databases that can ingest data from Apache Kafka, Apache Pulsar, Amazon Kinesis, Redpanda, and databases via native Change data capture connections or using Debezium connectors to MySQL and PostgreSQL sources. Previously, I wrote a blog post about how to choose the right streaming database that discusses some key factors that you should... - Source: dev.to / 2 months ago
  • Query Real Time Data in Kafka Using SQL
    Apache Kafka is a distributed streaming platform that allows you to store and process real-time data streams. It is commonly used in modern data architectures to capture and analyze user interactions with web and mobile applications, as well as IoT device data, logs, and system metrics. It is often used for real-time data processing, data pipelines, and event-driven applications. However, querying data stored in... - Source: dev.to / 3 months ago
  • Discussion Thread
    Idk I’m just making a really terrible pun about Kafka. Source: 3 months ago
  • Cues: low-latency persistent blocking queues, processors, and graphs via ChronicleQueue
    The processors and graphs are meant to provide a dead-simple version of the abstractions you get in a distributed messaging system like Kafka. But there are no clusters to configure, no partitions to worry about, and it is several orders of magnitude faster. Source: 3 months ago
  • How to choose the right streaming database
    You can ingest data from different data sources such as message brokers (Kafka, Redpanda, Kinesis, Pulsar) or databases (MySQL, PostgreSQL) using their Change Data Capture (CDC), which is the process of identifying and capturing data changes. - Source: dev.to / 3 months ago
  • Building a realtime performance monitoring system with Kafka and Go
    Recently, I had a chance to try out Apache Kafka for a monitoring service, and I was pleasantly surprised how you could set up a full-fledged event streaming system in a few lines of code. I quickly realised we could be building powerful systems with Kafka at the centre of things. Notification systems, distributed database synchronisations, and monitoring systems are some of the applications that come to mind when... - Source: dev.to / 4 months ago
  • Unit Testing Backward Compatibility of Message Format
    Do you apply Apache Kafka or RabbitMQ in your software project? If so, then you definitely have some message schemas. Have you ever encountered a backward compatibility issue? An accidental message format change, and your entire system is no longer functioning? I bet you have had such an unpleasant experience at least once in your career. - Source: dev.to / 4 months ago (see the compatibility-test sketch after this list)
  • Forward Compatible Enum Values in API with Java Jackson
    Suppose we develop a service that consumes data from one input (e.g. Apache Kafka, RabbitMQ, etc.), deduplicates messages, and produces the result to some output. Look at the diagram below that describes the process. - Source: dev.to / 4 months ago (see the Jackson enum sketch after this list)
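The "Top 6 message queues" mention above notes that Kafka is not technically a message queue but gets queue-like behaviour from topic partitions: within a consumer group, each partition is read by exactly one member, so records are load-balanced across workers. A minimal sketch of that idea with the official Java kafka-clients consumer follows; the broker address, group id, and "page-views" topic are assumptions for illustration only.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class PageViewWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed local broker
        props.put("group.id", "page-view-workers");         // members of this group share the partitions
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");          // start from the beginning if no offset is stored

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("page-views"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each partition is owned by exactly one group member at a time,
                    // which is what gives Kafka its queue-like work sharing.
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```

Running several copies of this program with the same group id spreads the topic's partitions across them; a second group id would receive every record again, which is the pub/sub side of the model.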
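The "Multi-Stream Joins With SQL" mention describes joining streams with a streaming database's SQL. As a rough point of comparison only (not the approach from that post), the sketch below shows a windowed stream-to-stream join in Kafka Streams; the "ad-impressions"/"ad-clicks" topics, the application id, and the 5-minute window are invented for the example.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.StreamJoined;

import java.time.Duration;
import java.util.Properties;

public class ClickImpressionJoin {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "click-impression-join"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // assumed local broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> impressions = builder.stream("ad-impressions"); // hypothetical topics
        KStream<String, String> clicks = builder.stream("ad-clicks");

        // Join records that share a key and arrive within 5 minutes of each other.
        impressions.join(
                clicks,
                (impression, click) -> impression + " -> " + click,
                JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
                StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()))
            .to("attributed-clicks");

        new KafkaStreams(builder.build(), props).start();
    }
}
```

A streaming database expresses the same intent declaratively (roughly a windowed JOIN between two stream-backed relations), while Kafka Streams makes the window and the state explicit in code.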
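The "Unit Testing Backward Compatibility of Message Format" mention asks whether an accidental format change has ever broken a system. One common guard, sketched below with Jackson and JUnit 5, is to keep a payload captured from the old producer format in the test suite and assert that the current consumer model still reads it; the OrderEvent type and its fields are hypothetical, not taken from the post.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;

class OrderEventCompatibilityTest {

    // Current consumer-side model; "currency" was added after the old producers shipped.
    static class OrderEvent {
        public String orderId;
        public long amountCents;
        public String currency;
    }

    @Test
    void deserializesMessageWrittenByOldProducer() throws Exception {
        // A payload frozen from the previous message format (no "currency" field).
        String oldPayload = "{\"orderId\":\"o-1\",\"amountCents\":1999}";

        OrderEvent event = new ObjectMapper().readValue(oldPayload, OrderEvent.class);

        assertEquals("o-1", event.orderId);
        assertEquals(1999, event.amountCents);
        assertNull(event.currency); // the new optional field simply stays unset for old messages
    }
}
```

If a future refactor renames or removes a field that old messages rely on, a test like this fails before the change reaches production.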
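The "Forward Compatible Enum Values in API with Java Jackson" mention is about consumers surviving enum values they do not know about yet. Jackson supports this with @JsonEnumDefaultValue plus READ_UNKNOWN_ENUM_VALUES_USING_DEFAULT_VALUE; the PaymentStatus model below is a made-up illustration rather than the post's actual code.

```java
import com.fasterxml.jackson.annotation.JsonEnumDefaultValue;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class EnumForwardCompatDemo {

    enum PaymentStatus {
        AUTHORIZED,
        SETTLED,
        @JsonEnumDefaultValue
        UNKNOWN // fallback for statuses this consumer does not know yet
    }

    static class PaymentEvent {
        public String paymentId;
        public PaymentStatus status;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper()
                .enable(DeserializationFeature.READ_UNKNOWN_ENUM_VALUES_USING_DEFAULT_VALUE);

        // A newer producer starts emitting a status value this consumer has never seen.
        String payload = "{\"paymentId\":\"p-7\",\"status\":\"REFUNDED\"}";

        PaymentEvent event = mapper.readValue(payload, PaymentEvent.class);
        System.out.println(event.status); // prints UNKNOWN instead of failing deserialization
    }
}
```

Without the annotated default, the same payload would throw an InvalidFormatException and the consumer would have to skip or dead-letter the message.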

External sources with reviews and comparisons of Apache Kafka

6 Best Kafka Alternatives: 2022’s Must-know List
With a robust suite of components based on communities like Apache Kafka and ActiveMQ, Red Hat AMQ offers a secure and lightweight solution for message delivery and is one of the best Kafka alternatives. Compared to most streaming tools, Red Hat AMQ executes faster and offers a flexible messaging tool that allows instant communication. Consequently, Red Hat AMQ effectively meets organizational needs and integrates...
Top 15 Kafka Alternatives Popular In 2021
Red Hat AMQ is a powerful suite of components that depends upon communities like Apache Kafka and Apache ActiveMQ to offer a secure and lightweight solution. It is fast in execution and is a flexible messaging tool through which information can be delivered instantly. It offers a quick response to organizational needs and integrates apps seamlessly across the enterprise.
Top 10 Popular Open-Source ETL Tools for 2021
Apache Kafka is an open-source data streaming tool written in Scala and Java. It publishes and subscribes to a stream of records in a fault-tolerant manner and provides a unified, high-throughput, and low-latency platform to manage data.
Top ETL Tools For 2021...And The Case For Saying "No" To ETL
Apache Kafka is an open source platform written in Scala and Java. It provides a unified, high-throughput, low-latency platform for managing real-time data. Kafka publishes and subscribes to a stream of records in a fault-tolerant way, immediately as they occur.
5 Best-Performing Tools that Build Real-Time Data Pipeline
Apache Kafka is also a leading technology for streaming real-time data pipelines. It is an open-source distributed streaming platform that is useful for building real-time data pipelines and stream processing applications. Enterprises use Apache Kafka to manage peak data ingestion loads and as a big data message bus. The capabilities of Apache Kafka to manage peak data ingestion loads are a unique...
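Several of the sources above single out Kafka's fault tolerance and its ability to absorb peak ingestion loads. Much of that behaviour is configured on the producer; the sketch below shows a few commonly tuned settings with the Java client, where the broker address, topic name, and specific values are assumptions for illustration rather than recommendations.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class DurablePipelineProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Durability and throughput knobs often tuned for ingestion-heavy pipelines.
        props.put(ProducerConfig.ACKS_CONFIG, "all");                // wait for all in-sync replicas
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // avoid duplicates on retry
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");            // batch briefly for throughput
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");    // shrink batches on the wire

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("ingest-events", "sensor-1", "{\"temp\":21.5}"));
        }
    }
}
```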

Do you know an article comparing Apache Kafka to other products?
Suggest a link to a post with product alternatives.

Generic Apache Kafka discussion


This is an informative page about Apache Kafka. You can review and discuss the product here. The primary details have not been verified within the last quarter, and they might be outdated. If you think we are missing something, please use the means on this page to comment or suggest changes. All reviews and comments are highly encouraged and appreciated, as they help everyone in the community make an informed choice. Please always be kind and objective when evaluating a product and sharing your opinion.