Based on our records, Apache Spark should be more popular than OctoSQL. It has been mentioned 70 times since March 2021. We track product recommendations and mentions across various public social media platforms and blogs; they can help you identify which product is more popular and what people think of it.
Apache Iceberg defines a table format that separates how data is stored from how data is queried. Any engine that implements the Iceberg integration — Spark, Flink, Trino, DuckDB, Snowflake, RisingWave — can read and/or write Iceberg data directly. - Source: dev.to / 11 days ago
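To make the "any engine can read and/or write" point concrete, here is a minimal PySpark sketch of creating and querying an Iceberg table. The runtime package coordinates, catalog name, warehouse path, and table/column names are illustrative assumptions, not anything prescribed by the quoted article.

```python
# Minimal sketch: writing and reading an Iceberg table from PySpark.
# Assumes an iceberg-spark-runtime package matching your Spark/Scala version;
# the version below and the local "hadoop" catalog are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-demo")
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# The table format is engine-agnostic: any Iceberg-aware engine pointed at the
# same catalog/warehouse can read what Spark writes here.
spark.sql("CREATE TABLE IF NOT EXISTS local.db.events (id BIGINT, name STRING) USING iceberg")
spark.sql("INSERT INTO local.db.events VALUES (1, 'signup'), (2, 'login')")
spark.sql("SELECT * FROM local.db.events").show()
```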
Apache Spark powers large-scale data analytics and machine learning, but as workloads grow exponentially, traditional static resource allocation leads to 30–50% resource waste due to idle Executors and suboptimal instance selection. - Source: dev.to / 13 days ago
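The waste described above comes from pinning a fixed executor count for the whole job. A hedged sketch of the alternative, Spark's built-in dynamic allocation, is below; the bounds and timeouts are illustrative values, not tuning recommendations.

```python
# Minimal sketch: enabling dynamic executor allocation instead of a static
# executor count, so idle executors are released back to the cluster.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dynamic-allocation-demo")
    .config("spark.dynamicAllocation.enabled", "true")
    # Track shuffle data without an external shuffle service (Spark 3.x).
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "2")     # illustrative
    .config("spark.dynamicAllocation.maxExecutors", "50")    # illustrative
    # Release executors that have been idle for a minute.
    .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
    .getOrCreate()
)
```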
One of the key attributes of Apache License 2.0 is its flexible nature. Permitting use in both proprietary and open source environments, it has become the go-to choice for innovative projects ranging from the Apache HTTP Server to large-scale initiatives like Apache Spark and Hadoop. This flexibility is not solely legal; it is also philosophical. The license is designed to encourage transparency and maintain a... - Source: dev.to / about 2 months ago
If you're designing an event-based pipeline, you can use a data streaming tool like Kafka to process data as it's collected by the pipeline. For a setup that already has data stored, you can use tools like Apache Spark to batch process and clean it before moving ahead with the pipeline. - Source: dev.to / 3 months ago
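As a rough illustration of the batch branch, here is a small PySpark sketch that cleans already-stored data before it moves further down the pipeline. The storage paths and column names are hypothetical.

```python
# Minimal sketch: batch-clean stored data with Spark before the next pipeline
# stage. Paths and columns ("event_id", "event_time") are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-clean").getOrCreate()

raw = spark.read.json("s3a://my-bucket/raw/events/")  # hypothetical location

cleaned = (
    raw.dropDuplicates(["event_id"])                   # hypothetical key column
       .filter(F.col("event_id").isNotNull())
       .withColumn("event_time", F.to_timestamp("event_time"))
)

cleaned.write.mode("overwrite").parquet("s3a://my-bucket/clean/events/")
```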
This looks extremely cool. This is basically incremental view maintenance in databases, a problem that almost everybody (I think) has when using SQL databases and wanting to do some derived views for more performant access patterns. Importantly, they seem to support a wide breadth of SQL operators, and it's open-source! There's already a bunch of tools in this area: 1. Materialize[0], which afaik is more... - Source: Hacker News / 7 months ago
OctoSQL[0] or DuckDB[1] will most likely be much simpler, while going through 10 GB of JSON in a couple seconds at most. Disclaimer: author of OctoSQL [0]: https://github.com/cube2222/octosql. - Source: Hacker News / about 2 years ago
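For the DuckDB route mentioned in that comment, a minimal sketch of querying a JSON file in place (no load step) looks roughly like this; the file name and columns are hypothetical, and it assumes a DuckDB build with the bundled JSON reader.

```python
# Minimal sketch: query a large JSON file directly with DuckDB.
# "events.json" and the columns below are hypothetical.
import duckdb

result = duckdb.sql("""
    SELECT user_id, count(*) AS n        -- hypothetical columns
    FROM read_json_auto('events.json')
    GROUP BY user_id
    ORDER BY n DESC
    LIMIT 10
""").fetchall()

print(result)
```

OctoSQL offers a similar one-shot experience from its CLI against the same kind of file.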
This is really cool! With their Postgres scanner[0] you can now easily query multiple datasources using SQL and join between them (i.e. Postgres table with JSON file). Something I strived to build with OctoSQL[1] before. It's amazing to see how quickly DuckDB is adding new features. Not a huge fan of C++, which is right now used for authoring extensions, it'd be really cool if somebody implemented a Rust extension... - Source: Hacker News / about 2 years ago
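A hedged sketch of the cross-source join described there, using DuckDB's Postgres extension from Python: the connection string, table, and file names are hypothetical, and current DuckDB releases expose this via ATTACH rather than the original scanner function.

```python
# Minimal sketch: join a Postgres table against a local JSON file in DuckDB.
# Connection string, "orders" table, and "events.json" are hypothetical.
import duckdb

con = duckdb.connect()
con.execute("INSTALL postgres")
con.execute("LOAD postgres")
con.execute("ATTACH 'host=localhost dbname=shop user=app' AS pg (TYPE postgres)")

rows = con.sql("""
    SELECT o.id, o.total, e.session_id   -- hypothetical columns
    FROM pg.public.orders AS o
    JOIN read_json_auto('events.json') AS e
      ON o.user_id = e.user_id
""").fetchall()
```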
Congrats on the Show HN! It's great to see more tools in this area (querying data from various sources in-place), and the Lambda use case is a really cool idea! I've recently done a bunch of benchmarking, including ClickHouse Local, and the usage was straightforward, with everything working as it's supposed to. Just to comment on performance though, one area I think ClickHouse could still possibly improve... - Source: Hacker News / over 2 years ago
SPyQL is really cool and its design is very smart, with it being able to leverage normal Python functions! As far as similar tools go, I recommend taking a look at DataFusion[0], dsq[1], and OctoSQL[2]. DataFusion is a very (very very) fast command-line SQL engine but with limited support for data formats. Dsq is based on SQLite which means it has to load data into SQLite first, but then gives you the whole breadth... - Source: Hacker News / over 2 years ago
Apache Flink - Flink is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations.
Materialize - A Streaming Database for Real-Time Applications
Hadoop - Open-source software for reliable, scalable, distributed computing
Steampipe - select * from cloud; the extensible SQL interface to your favorite cloud APIs (AWS, Azure, GCP, GitHub, Slack, etc.)
Apache Hive - Apache Hive data warehouse software facilitates querying and managing large datasets residing in distributed storage.
LNAV - The Log File Navigator (lnav) is an advanced log file viewer for the console.