If you're trying to bring this to a wider audience not already familiar with q, then the name collision with the more widely known jq project is a problem: https://stedolan.github.io/jq/. - Source: Hacker News / 4 months ago
First, we need to install jq via the installer available at https://stedolan.github.io/jq/. - Source: dev.to / 5 months ago
JQ is a lightweight and powerful command-line JSON processor. It's a time-saving tool for manipulating and extracting data from JSON files effortlessly. - Source: dev.to / 7 months ago
If you have jq installed you can use it to make the output look nicer. - Source: dev.to / 11 months ago
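As a minimal sketch of that use (the sample JSON here is made up), piping any JSON-emitting command through jq pretty-prints it:

```shell
# The identity filter "." passes the input through, pretty-printed.
# The echo stands in for whatever command produces JSON output.
echo '{"name":"demo","tags":["a","b"]}' | jq .
```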
It requires jq for JSON processing and GNU parallel for concurrent searches in the notebooks. - Source: dev.to / 12 months ago
The jq command (https://stedolan.github.io/jq/) is useful for pulling that information out. Source: about 1 year ago
I would use the tool jq (https://stedolan.github.io/jq/) for this. Source: about 1 year ago
You can pull your API key out of the settings section and the folder ID from the sync config for that folder. The output is in JSON which you can process with jq from the command line, or just make that call in your favorite programming language and process the JSON from there. Source: about 1 year ago
OK, I threw in a lot of commands here. "\$select=count(*), zip_code" selects the count and zip_code; the SoQL count function counts the number of rows that match our search criteria. "$group=zip_code", similar to the SQL GROUP BY, returns aggregate rows grouped by the zip_code. jq -r '.[] | .zip_code + " " + .count' uses the very useful jq to do additional filtering. jq bills itself as, “a lightweight... - Source: dev.to / about 1 year ago
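A small sketch of that jq step, run against a made-up sample shaped like the SoQL result (an array of rows with zip_code and count as strings):

```shell
# -r emits raw strings instead of JSON-quoted ones;
# ".[]" iterates the array, and "+" concatenates the two string fields.
echo '[{"zip_code":"10001","count":"42"},{"zip_code":"10002","count":"7"}]' \
  | jq -r '.[] | .zip_code + " " + .count'
# → 10001 42
#   10002 7
```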
One of the best benefits of PQ is the low bar for getting started. As you advance your skills you will learn more and better ways to work with data. I routinely process 10+ GB of JSON files through PQ. When I first started I was grateful just to be able to automate. Even if it took hours to process. Then I started tweaking and cut the time in half (I recently cut the time down to <10 minutes using jq to... Source: about 1 year ago
Here's bash-notes: it's written 100% in bash and its only dependency is jq. It can add, edit, remove, or list notes; it uses Vim or whatever editor you may like; and you can specify which terminal emulator to use and what options to pass when you execute it. Source: about 1 year ago
To the best of my knowledge, Vim's problems with large files are usually about long lines. If your JSON file is all on one line, that's going to be a much, much bigger problem than if it's formatted on many lines. Try formatting the JSON with jq . beforehand and see if there's a difference. Source: about 1 year ago
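Concretely, that pre-formatting step might look like this (the filenames are placeholders):

```shell
# Reflow a single-line JSON file onto many indented lines,
# then open the result in Vim instead of the original.
jq . big.json > formatted.json
vim formatted.json
```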
If they had used an .nvmrc file instead, I could run nvm use to ensure I'm using the correct node version. In order to get this same type of behavior I use a function I wrote called nvmpe that uses jq to get the node version from the value from package.json and then nvm to set my node version. Source: about 1 year ago
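A rough sketch of such a helper, under the assumption that the version is pinned in package.json under .engines.node (the function name and field are taken from the description, not from published code):

```shell
# Read the pinned node version out of package.json with jq,
# then hand it to nvm. Requires jq and nvm on the PATH.
nvmpe() {
  local version
  version=$(jq -r '.engines.node' package.json) || return 1
  nvm use "$version"
}
```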
Hi! It's more like writing jq in your struct definitions and executing them with serde::Deserialize. Source: about 1 year ago
I would add jq to the data wrangling section https://stedolan.github.io/jq/ Being moderately competent at jq on a team that doesn't know jq is a superpower. - Source: Hacker News / about 1 year ago
Hey I just wanted to drop two tools here that you might find useful: the first is jq (JSON Query) which is a standard command-line tool for working with JSON data. It's one of those tiny tools with a thousand-page manual full of features you'll probably never need, but if you're working with automating JSON you should definitely get an idea of what it does. Source: about 1 year ago
If you are dealing with large JSON files you may want to look into jq. It runs on PowerShell; create a filter file to avoid the Windows escaping of quotes. Source: about 1 year ago
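A minimal sketch of the filter-file approach (the filenames are examples): putting the filter in a file and loading it with -f means no quotes need escaping on the command line.

```shell
# Write the filter to a file once...
echo '.[] | .name' > filter.jq
# ...then run jq with -f, sidestepping shell quote-escaping entirely.
jq -r -f filter.jq data.json
```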
First, a brief introduction to jq — a wonderful tool for manipulating JSON on the command line — “like sed for JSON data”, as the official website describes it. It is truly powerful and able to manipulate JSON in a myriad of ways. When run with no arguments, it simply formats the given document. - Source: dev.to / over 1 year ago
One of the newer tools that people need to learn is jq. You should almost never be manually trying to parse JSON, unless it's something stupid simple like grep for a simple value. Even then, if you know it will be in a particular key, use jq to extract it so you know you got exactly the value you're looking for. Source: over 1 year ago
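For instance, extracting a value at a known key might look like this (the JSON here is an illustrative sample):

```shell
# Pull exactly the value at the "version" key rather than grepping for it;
# -r strips the surrounding JSON quotes from the string.
echo '{"status":"ok","version":"2.1"}' | jq -r '.version'
# → 2.1
```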
To start the SSM session, you need the EC2 instance ID. You can either get it from the AWS Console, or run the following with appropriate filters if you have lots of instances. The following assumes you have jq installed and have instance tags. - Source: dev.to / over 1 year ago
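One possible shape of that lookup, assuming the AWS CLI is configured and the instances carry a Name tag (the tag value is a placeholder):

```shell
# Filter describe-instances by tag, then pull out just the instance IDs
# from the JSON response with jq.
aws ec2 describe-instances --filters "Name=tag:Name,Values=my-server" \
  | jq -r '.Reservations[].Instances[].InstanceId'
```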
Because in theory if JSON support is built in (and it is not full featured like JAXB or Jackson) a strong use case would be to use it for zero-dependency scripts (JEP 330). I guess think of "bash + jq + curl" but plain builtin Java instead. Source: over 1 year ago
This is an informative page about jq. You can review and discuss the product here. The primary details have not been verified within the last quarter, and they might be outdated. If you think we are missing something, please use the means on this page to comment or suggest changes. All reviews and comments are highly encouraged and appreciated as they help everyone in the community to make an informed choice. Please always be kind and objective when evaluating a product and sharing your opinion.