Based on our records, Delta Lake should be more popular than Google Cloud Dataflow. It has been mentioned 31 times since March 2021. We are tracking product recommendations and mentions on various public social media platforms and blogs. They can help you identify which product is more popular and what people think of it.
Delta is pretty great; it lets you do upserts into tables in Databricks much more easily than without it. I think the website is here: https://delta.io. - Source: Hacker News / 4 months ago
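The upsert workflow that quote refers to is Delta Lake's MERGE support. A minimal PySpark sketch is below; the table path, staging path, and `id` join key are hypothetical placeholders, not anything from the quote.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# Requires the delta-spark package; on Databricks the Delta Lake
# extensions are already configured on the cluster.
spark = SparkSession.builder.appName("delta-upsert-sketch").getOrCreate()

# Hypothetical paths and join key, purely for illustration.
target = DeltaTable.forPath(spark, "/mnt/lake/customers")
updates = spark.read.parquet("/mnt/staging/customer_updates")

# Upsert: update rows that match on id, insert the rest.
(target.alias("t")
    .merge(updates.alias("u"), "t.id = u.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```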
Apache Iceberg is one of the three major lakehouse table formats, the other two being Apache Hudi and Delta Lake. - Source: dev.to / 5 months ago
The Apache Spark / Databricks community prefers Apache Parquet or the Linux Foundation's delta.io over JSON. Source: 5 months ago
Databricks provides JupyterLab-like notebooks for analysis and ETL pipelines using Spark through PySpark, Spark SQL, or Scala. I think R is supported as well, but it doesn't interop with their newer features as well as Python and SQL do. It interfaces with cloud storage backends like S3 and offers some improvements over the Parquet format for data querying that allow for updating, ordering, and merging through... - Source: Hacker News / 10 months ago
Structured, semi-structured, and unstructured data can be stored in one single format, a lakehouse storage format like Delta, Iceberg, or Hudi (assuming those workloads don't require low-latency SLAs, e.g. subsecond). Source: 11 months ago
Imo if you are using the cloud and not doing anything particularly fancy, the native tooling is good enough. For AWS that is DMS (for RDBMS) and Kinesis/Lambda (for streams). Google has Data Fusion and Dataflow. Azure has Data Factory if you are unfortunate enough to have to use SQL Server or Azure. Imo the vendored tools and open-source tools are more useful when you need to ingest data from SaaS platforms, and... Source: over 1 year ago
This sub is for Apache Beam and Google Cloud Dataflow as the sidebar suggests. Source: over 1 year ago
I am pretty sure they are using Pub/Sub with probably a Dataflow pipeline to process all that data. Source: over 1 year ago
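For context on the Pub/Sub + Dataflow pattern mentioned there: Dataflow pipelines are typically written with the Apache Beam SDK and read from Pub/Sub in streaming mode. A minimal Python sketch follows; the project, topic, and table names are hypothetical.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical resource names; replace with real project/topic/table IDs.
TOPIC = "projects/my-project/topics/events"
TABLE = "my-project:analytics.events"

# streaming=True for an unbounded Pub/Sub source; add runner=DataflowRunner
# (plus project/region/temp_location) to run it on Dataflow instead of locally.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC)
     | "Decode" >> beam.Map(lambda msg: {"payload": msg.decode("utf-8")})
     | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
         TABLE,
         schema="payload:STRING",
         write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```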
You can run a Dataflow job that copies the data directly from BQ into S3, though you'll have to run a job per table. This can be somewhat expensive to do. Source: over 1 year ago
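One possible shape for the BigQuery-to-S3 copy described above is a Beam pipeline per table; Beam's Python SDK can write to s3:// paths when the aws extras and credentials are configured. The table and bucket names in this sketch are hypothetical.

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical source table and destination bucket; as the quote notes,
# you would run one such job per table.
SOURCE_TABLE = "my-project:dataset.my_table"
DEST_PREFIX = "s3://my-bucket/exports/my_table"

with beam.Pipeline(options=PipelineOptions()) as p:
    (p
     | "ReadFromBQ" >> beam.io.ReadFromBigQuery(table=SOURCE_TABLE)
     # Rows arrive as dicts; serialize them as JSON lines (default=str covers dates).
     | "ToJson" >> beam.Map(lambda row: json.dumps(row, default=str))
     | "WriteToS3" >> beam.io.WriteToText(DEST_PREFIX, file_name_suffix=".json"))
```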
It was clear we needed something that was built specifically for our big-data SaaS requirements. Dataflow was our first idea, as the service is fully managed, highly scalable, fairly reliable, and has a unified model for streaming & batch workloads. Sadly, the cost of this service was quite high. Secondly, at that moment in time, the service only accepted Java implementations, of which we had little knowledge... - Source: dev.to / almost 2 years ago
Amazon SageMaker - Amazon SageMaker provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly.
Google BigQuery - A fully managed data warehouse for large-scale data analytics.
GeoSpock - GeoSpock is the platform for data lake management, providing a unified view of the data assets within an organization and making it easily accessible.
Amazon EMR - Amazon Elastic MapReduce is a web service that makes it easy to quickly process vast amounts of data.
Cloud Dataprep - Cloud Dataprep by Trifacta is a data prep & cleansing service for exploring, cleaning & preparing datasets using a simple drag & drop browser environment.
Databricks - Databricks provides a Unified Analytics Platform that accelerates innovation by unifying data science, engineering and business.