Hadoop might be slightly more popular than DuckDB: we have tracked 16 links to it since March 2021, compared with 15 links to DuckDB. We track product recommendations and mentions across public social media platforms and blogs; these can help you identify which product is more popular and what people think of it.
I have lived through the Big Data hype; it was the era of HDFS+HTable, I guess, and Hadoop etc. One can't go wrong with DuckDB+SQLite+Open/Elasticsearch either, even with 6 to 8 or 10 TB of data. [0]. https://duckdb.org/. - Source: Hacker News / 20 days ago
More than once, I have been in a situation where I needed to query CloudTrail logs but was working in a customer environment where they weren’t aggregated to a search interface. Another similar situation is when CloudTrail data events are disabled for cost reasons but need to be temporarily turned on for troubleshooting/audit purposes. While the CloudTrail console offers some (very) limited lookups (for management... - Source: dev.to / 27 days ago
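When CloudTrail logs aren't aggregated to a search interface, one workable fallback is to download the log files locally and filter them yourself. Below is a minimal stdlib Python sketch along those lines; it relies only on the documented CloudTrail log file layout (a JSON document with a top-level `Records` array whose entries carry fields like `eventName`), and the directory and function names are illustrative, not part of any AWS tooling.

```python
import gzip
import json
from pathlib import Path

def find_events(log_dir, event_name):
    """Scan locally downloaded CloudTrail log files (plain or
    gzip-compressed JSON) and yield records whose eventName matches."""
    for path in Path(log_dir).glob("*.json*"):
        opener = gzip.open if path.suffix == ".gz" else open
        with opener(path, "rt") as f:
            doc = json.load(f)
        # CloudTrail log files store events under a "Records" array.
        for record in doc.get("Records", []):
            if record.get("eventName") == event_name:
                yield record
```

For one-off troubleshooting this is often enough; for anything larger, pointing a local analytical engine such as DuckDB at the same JSON files scales better than hand-rolled loops.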
DuckDB: An in-process SQL OLAP database management system. While not a traditional OLAP database, DuckDB is designed to execute analytical queries efficiently, making it suitable for analytical workloads within data-intensive applications. - Source: dev.to / 4 months ago
The easiest way to try a SIMD table-scan database in practice is DuckDB: https://duckdb.org/. - Source: Hacker News / 4 months ago
DuckDB, so we can make OLAP-like queries on the data. - Source: dev.to / 7 months ago
Data analysis software is also widely used in the telecommunications industry to manage network performance, detect fraud, and analyze customer data. Telecommunications companies can use data analysis software to analyze network data in real-time, allowing them to identify and address issues quickly. In addition, data analysis software can help telecommunications companies identify new revenue streams and improve... - Source: dev.to / 11 days ago
Did you check out tools like https://hadoop.apache.org/ ? Source: about 1 year ago
There are different ways to implement parallel dataflows, such as using parallel data processing frameworks like Apache Hadoop, Apache Spark, and Apache Flink, or using cloud-based services like Amazon EMR and Google Cloud Dataflow. It is also possible to use parallel dataflow frameworks to handle big data and distributed computing, like Apache Nifi and Apache Kafka. Source: over 1 year ago
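The map/shuffle/reduce dataflow underlying frameworks like Hadoop and Spark can be sketched in plain Python. The toy word count below runs its map phase on a thread pool purely to illustrate the model; it is not a substitute for those frameworks, and all the names here are illustrative.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def map_phase(chunk):
    """Map: emit (word, 1) pairs for one chunk of input lines."""
    return [(word, 1) for line in chunk for word in line.split()]

def word_count(lines, workers=4):
    """Toy MapReduce-style word count: chunks are mapped in
    parallel, then pairs are shuffled by key and reduced."""
    chunks = [lines[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        mapped = pool.map(map_phase, chunks)
    # Shuffle: group emitted pairs by key.
    groups = defaultdict(list)
    for pairs in mapped:
        for word, n in pairs:
            groups[word].append(n)
    # Reduce: sum the counts for each key.
    return {word: sum(ns) for word, ns in groups.items()}
```

Real frameworks add what this sketch omits: distribution across machines, fault tolerance, and spilling to disk when the data doesn't fit in memory.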
There are several frameworks available for batch processing, such as Hadoop, Apache Storm, and DataTorrent RTS. - Source: dev.to / over 1 year ago
A copy of Hadoop installed on each of these machines. You can download Hadoop from the Apache website, or you can use a distribution like Cloudera or Hortonworks. - Source: dev.to / over 1 year ago
Apache Druid - Fast column-oriented distributed data store
Apache Spark - Apache Spark is an engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing.
OctoSQL - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL. - cube2222/octosql
Apache Cassandra - The Apache Cassandra database is the right choice when you need scalability and high availability without compromising performance.
MonetDB - Column-store database
PostgreSQL - PostgreSQL is a powerful, open source object-relational database system.