Build real-time ETL/ELT and CDC data pipelines from SaaS APIs, RDBMSs, HTTP endpoints, and webhooks to the cloud data warehouse, all within a no-code UI.
Based on our records, Estuary Flow appears to be considerably more popular than Confluent: we know of 14 links to Estuary Flow, while we've tracked only 1 mention of Confluent. We track product recommendations and mentions across various public social media platforms and blogs. These can help you identify which product is more popular and what people think of it.
SEEKING FREELANCER | Python Developer | Remote (Within 3 hours of EST) Estuary is a dynamic company focused on developing cutting-edge real-time data integration solutions. Our platform is powered by an open-source repository of pre-built data connectors, making data exchange between systems seamless. https://estuary.dev/ We are seeking a passionate and talented Software Engineer to help expand our catalog of data... - Source: Hacker News / about 1 month ago
I work at Estuary, which is itself a streaming data pipeline. We actually use that approach to power all of the data processing statistics we show in our UI. Lately we've been processing ~200-300 transactions per second (each transaction produces a stats event), and the stats queries in the dashboard are quite snappy. We actually pre-aggregate by minute, hour, and day in order to serve queries of larger time... Source: 5 months ago
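The pre-aggregation approach described above can be sketched as follows. This is a minimal illustration, not Estuary's actual schema or code: the event shape (`ts`, `bytes`) and bucket granularities are assumptions for the example.

```python
from collections import defaultdict
from datetime import datetime, timezone

def bucket_key(ts: datetime, granularity: str) -> datetime:
    """Truncate a timestamp to the start of its minute/hour/day bucket."""
    if granularity == "minute":
        return ts.replace(second=0, microsecond=0)
    if granularity == "hour":
        return ts.replace(minute=0, second=0, microsecond=0)
    if granularity == "day":
        return ts.replace(hour=0, minute=0, second=0, microsecond=0)
    raise ValueError(f"unknown granularity: {granularity}")

def pre_aggregate(events, granularity):
    """Roll raw per-transaction stats events up into time buckets,
    so dashboard queries scan a few aggregate rows instead of
    hundreds of events per second."""
    totals = defaultdict(lambda: {"count": 0, "bytes": 0})
    for ev in events:
        key = bucket_key(ev["ts"], granularity)
        totals[key]["count"] += 1
        totals[key]["bytes"] += ev["bytes"]
    return dict(totals)

# Three raw events spanning two minutes.
events = [
    {"ts": datetime(2024, 1, 1, 12, 0, 5, tzinfo=timezone.utc), "bytes": 100},
    {"ts": datetime(2024, 1, 1, 12, 0, 30, tzinfo=timezone.utc), "bytes": 50},
    {"ts": datetime(2024, 1, 1, 12, 1, 2, tzinfo=timezone.utc), "bytes": 25},
]
per_minute = pre_aggregate(events, "minute")
```

Serving a large time range then means querying the day-level rollup and only dropping to hour or minute rollups at the edges, which keeps query cost roughly constant regardless of event volume.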
Estuary (https://estuary.dev ; I'm CTO) gives you a real-time, data-lake-style change log of all the changes happening in your database, stored in your cloud storage -- complete with log sequence number, database time, and even before/after states if you use REPLICA IDENTITY FULL -- with no extra setup in your production DB. By default, if you then go on to materialize your collections somewhere else (like Snowflake), you get... - Source: Hacker News / 8 months ago
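To make the mention above concrete, here is a hypothetical sketch of what one such change record might look like and how the before/after states (available under PostgreSQL's REPLICA IDENTITY FULL) can be used downstream. The field names are illustrative assumptions, not Estuary's actual record format.

```python
# Hypothetical CDC change record: one UPDATE captured from the
# database's replication log, assuming REPLICA IDENTITY FULL so the
# full pre-image ("before") is available alongside the new row.
change_event = {
    "op": "update",
    "lsn": 123456789,                 # log sequence number
    "db_time": "2024-01-01T00:00:00Z",
    "before": {"id": 1, "status": "pending"},
    "after": {"id": 1, "status": "shipped"},
}

def changed_columns(event):
    """Return the columns whose values differ between the before and
    after images -- only possible when the pre-image is captured."""
    before, after = event["before"], event["after"]
    return [col for col in after if before.get(col) != after[col]]

diff = changed_columns(change_event)
```

Without REPLICA IDENTITY FULL, the pre-image is typically limited to the primary key, so column-level diffs like this are not recoverable from the log alone.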
Disclaimer: I work for a streaming ETL startup (estuary.dev) with a Kafka connector and the ability to share data. I'm wondering whether Confluent's current functionality is missing features by not making it easier to push shared streams to consumers... or, more generally, what other things are on the 'wish list' of those sharing/receiving topics. Source: 8 months ago
Hi, I'm Estuary's CTO (https://estuary.dev). Mind speaking a bit more about what didn't work? We put quite a bit of effort into our CDC connectors, as it's a core competency. We have numerous customers using them at scale successfully, but they can be a bit nuanced to get configured. We're constantly trying to make our onboarding experience more intuitive and seamless... it's a hard problem. - Source: Hacker News / 10 months ago
We’re going to set up a Kafka cluster using confluent.io, create a producer and a consumer, and enhance our behavior-driven tests to include the new interface. We’re going to update our Helm chart so that the updates are seamless to Kubernetes, and we’re going to leverage our observability stack to propagate traces in the published messages. Source: about 2 years ago
Fivetran - Fivetran offers companies a data connector for extracting data from many different cloud and database sources.
Amazon Kinesis - Amazon Kinesis services make it easy to work with real-time streaming data in the AWS cloud.
Striim - Striim provides an end-to-end, real-time data integration and streaming analytics platform.
Apache Flink - Flink is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations.
Tonkean - An AI-powered dashboard with automatic insights from your team.
Spark Streaming - Spark Streaming makes it easy to build scalable and fault-tolerant streaming applications.