Imo if you are using the cloud and not doing anything particularly fancy, the native tooling is good enough. For AWS that is DMS (for RDBMS) and Kinesis/Lambda (for streams). Google has Data Fusion and Dataflow. Azure has Data Factory if you are unfortunate enough to have to use SQL Server or Azure. Imo the vendored tools and open source tools are more useful when you need to ingest data from SaaS platforms, and... - Source: Reddit / 4 months ago
This sub is for Apache Beam and Google Cloud Dataflow as the sidebar suggests. - Source: Reddit / 8 months ago
I am pretty sure they are using Pub/Sub, probably with a Dataflow pipeline, to process all that data. - Source: Reddit / 8 months ago
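The quote gives no implementation details, but as a minimal sketch, a Pub/Sub-fed streaming pipeline in the Beam Python SDK might look like the following; the project and subscription names are hypothetical placeholders, and the print step stands in for a real sink:

```python
# Hypothetical sketch: stream Pub/Sub messages into a Beam pipeline
# (the kind of setup Dataflow would execute as a streaming job).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(
           subscription="projects/my-project/subscriptions/my-sub")  # placeholder
     | "Window" >> beam.WindowInto(FixedWindows(60))  # 60-second fixed windows
     | "Count" >> beam.combiners.Count.Globally().without_defaults()
     | "Print" >> beam.Map(print))  # stand-in for a real sink
```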
You can run a Dataflow job that copies the data directly from BQ into S3, though you'll have to run a job per table. This can be somewhat expensive to do. - Source: Reddit / 8 months ago
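As a rough illustration of such a per-table job, here is a hedged Beam Python sketch; the table and bucket names are placeholders, and writing to s3:// paths assumes the apache-beam[aws] extra is installed so Beam's S3 filesystem is available:

```python
import json
import apache_beam as beam

# One job per table, as the quote notes. Table and bucket names are
# placeholders; writing to s3:// paths needs the apache-beam[aws] extra.
with beam.Pipeline() as p:
    (p
     | "ReadBQ" >> beam.io.ReadFromBigQuery(table="my-project:my_dataset.my_table")
     | "ToJson" >> beam.Map(lambda row: json.dumps(row, default=str))  # rows arrive as dicts
     | "WriteS3" >> beam.io.WriteToText(
           "s3://my-bucket/exports/my_table",
           file_name_suffix=".json"))
```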
It was clear we needed something that was built specifically for our big-data SaaS requirements. Dataflow was our first idea, as the service is fully managed, highly scalable, fairly reliable, and has a unified model for streaming & batch workloads. Sadly, the cost of this service was quite high. Second, at that time, the service only accepted Java implementations, of which we had little knowledge... - Source: dev.to / about 1 year ago
Cloud Dataflow: Stream/batch data processing. - Source: dev.to / 9 months ago
What you are looking for is Dataflow. It can be a bit tricky to wrap your head around at first, but I highly suggest leaning into this technology for most of your data engineering needs. It's based on the open source Apache Beam framework that originated at Google. We use an internal version of this system at Google for virtually all of our pipeline tasks, from a few GB to exabyte-scale systems -- it can do it all. - Source: Reddit / 9 months ago
The go-to recommendation is to use Dataflow to write your pipeline instead of disjoint functions. You can do something like this: - Source: Reddit / 10 months ago
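The code from that thread wasn't preserved in the quote; as a stand-in, here is a minimal, hypothetical Beam Python sketch of what "something like this" could look like — one pipeline chaining the steps, rather than disjoint functions passing intermediate results around. Paths and parsing logic are placeholders:

```python
import apache_beam as beam

# One pipeline chaining the steps, rather than disjoint functions passing
# intermediate results around. Paths and parsing are placeholders.
with beam.Pipeline() as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")
     | "Parse" >> beam.Map(lambda line: line.split(","))
     | "Filter" >> beam.Filter(lambda fields: len(fields) > 1)
     | "Format" >> beam.Map(",".join)
     | "Write" >> beam.io.WriteToText("gs://my-bucket/output/result"))
```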
With that, the best way to maximize throughput and minimize processing time is to use Dataflow or Dataproc, depending on your needs. These systems are highly parallel and clustered, which allows for much larger processing pipelines that execute quickly. - Source: Reddit / over 1 year ago
Stream data into Dataflow pipelines from R. - Source: Reddit / over 1 year ago
I'm not 100% sure, but perhaps Google Cloud Dataflow is similar to Azure Data Factory. - Source: Reddit / over 1 year ago
Apache Beam - Apache Beam is a scalable framework that allows you to implement batch and streaming data processing jobs. You can use it to create a data pipeline on Google Cloud or on Amazon Web Services. - Source: dev.to / about 2 years ago
Dataflow is Google's implementation of a runner for Apache Beam jobs in Google Cloud. Right now, Python and Java are pretty much the only two options supported for writing Beam jobs that run on Dataflow. - Source: Reddit / about 2 years ago
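To make the runner relationship concrete, here is a small sketch of the Python pipeline options that target Dataflow; the project, region, bucket, and job name are placeholder values:

```python
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder project/region/bucket values. The same pipeline code runs
# locally with runner="DirectRunner" or on Dataflow with "DataflowRunner".
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
    job_name="example-beam-job",
)
```

The runner option is the only part tied to Dataflow here; swapping it for the direct runner leaves the pipeline code unchanged, which is the portability the Beam model is built around.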
"Google Cloud’s databases and analytics products such as BigQuery, Dataflow, Pub/Sub and Firestore brought Theta Labs unlimited scale and performance, allowing them to: ...". - Source: Reddit / about 2 years ago