Apify is a JavaScript & Node.js based data extraction tool for websites that crawls lists of URLs and automates workflows on the web. With Apify you can manage and automatically scale a pool of headless Chrome / Puppeteer instances, maintain queues of URLs to crawl, store crawling results locally or in the cloud, rotate proxies and much more.
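As a rough illustration of the feature set described above, here is a minimal sketch of a Puppeteer crawler built with Apify's open-source Crawlee library. The start URL, the request cap, and the stored fields are placeholder assumptions, not something taken from Apify's description:

```typescript
import { PuppeteerCrawler, Dataset } from 'crawlee';

// A minimal Puppeteer crawler: it maintains a queue of URLs, drives
// headless Chrome, and stores results in a dataset (locally or in the cloud).
const crawler = new PuppeteerCrawler({
    maxRequestsPerCrawl: 50, // placeholder cap to keep the example small
    async requestHandler({ request, page, enqueueLinks }) {
        const title = await page.title();
        // Persist one record per visited page.
        await Dataset.pushData({ url: request.url, title });
        // Add links discovered on the page to the crawl queue.
        await enqueueLinks();
    },
});

// 'https://example.com' is a placeholder start URL.
await crawler.run(['https://example.com']);
```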
Simple Scraper is the easiest way to scrape the web: turn any website into an API in seconds and use ready-made scraping recipes for popular sites.
Apify might be a bit more popular than Simple Scraper. We have recorded 26 links to it since March 2021, compared with 20 links to Simple Scraper. We track product recommendations and mentions on various public social media platforms and blogs; these can help you identify which product is more popular and what people think of it.
For deployment, we'll use the Apify platform. It's a simple and effective environment for cloud deployment, allowing efficient interaction with your crawler. Call it via API, schedule tasks, integrate with various services, and much more. - Source: dev.to / 3 days ago
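To make the "call it via API" point concrete, here is a hedged sketch of starting an Actor run through the Apify REST API. The Actor ID, token, and input shape are placeholder assumptions; the exact input depends on the Actor you are calling:

```typescript
// Placeholder values; substitute your own Actor ID and API token.
const ACTOR_ID = 'username~my-actor';
const APIFY_TOKEN = process.env.APIFY_TOKEN;

// Start an Actor run via the Apify API, passing an input object as the body.
const response = await fetch(
    `https://api.apify.com/v2/acts/${ACTOR_ID}/runs?token=${APIFY_TOKEN}`,
    {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ startUrls: [{ url: 'https://example.com' }] }),
    },
);

const run = await response.json();
console.log('Run started:', run.data.id);
```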
We already have a fully functional implementation for local execution. Let's explore how to adapt it to run on the Apify Platform and transform it into an Apify Actor. - Source: dev.to / about 1 month ago
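A minimal sketch of what that transformation typically looks like with the Apify SDK for JavaScript, assuming the existing crawl logic is dropped in where indicated; the input type and the pushed record are placeholders:

```typescript
import { Actor } from 'apify';

// Actor.init() connects the script to Apify storages and the platform
// environment when deployed, while still working for local runs.
await Actor.init();

// Read the Actor input (configured in the Apify Console or passed via API).
const input = await Actor.getInput<{ startUrls?: { url: string }[] }>();

// ... the existing local crawling logic goes here, pushing results as it runs ...
await Actor.pushData({ note: 'placeholder result', receivedInput: input });

// Actor.exit() flushes storages and marks the run as finished.
await Actor.exit();
```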
We've had the best success by first converting the HTML to a simpler format (i.e. markdown) before passing it to the LLM. There are a few ways to do this that we've tried, namely Extractus[0] and dom-to-semantic-markdown[1]. Internally we use Apify[2] and Firecrawl[3] for Magic Loops[4] that run in the cloud, both of which have options for simplifying pages built-in, but for our Chrome Extension we use... - Source: Hacker News / 8 months ago
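As an illustration of the HTML-to-Markdown step described in that comment, here is a small sketch using the Turndown library, a different converter from the ones named above, used purely as an example:

```typescript
import TurndownService from 'turndown';

// Convert raw page HTML into Markdown so the LLM sees a simpler,
// lower-token representation of the page content.
const turndown = new TurndownService({ headingStyle: 'atx' });

const html = '<article><h1>Example</h1><p>Some <b>content</b>.</p></article>';
const markdown = turndown.turndown(html);

// The Markdown string is what would be passed to the LLM prompt.
console.log(markdown);
```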
Developed by Apify, it is a Python adaptation of their famous JS framework Crawlee, first released on Jul 9, 2019. - Source: dev.to / 8 months ago
Hey all, This is Jan, the founder of [Apify](https://apify.com/)—a full-stack web scraping platform. After the success of [Crawlee for JavaScript](https://github.com/apify/crawlee/), we're launching Crawlee for Python today! The main features are: - A unified programming interface for both HTTP (HTTPX with BeautifulSoup) & headless browser crawling (Playwright). - Source: Hacker News / 10 months ago
Making my data extraction SaaS (https://simplescraper.io) more LLM-friendly. Markdown extraction, improved Google search, workflows (search for these terms, visit the first N links, summarize, etc.). Big demand for (or rather, expectation of) this lately. - Source: Hacker News / 6 months ago
Things are much easier for one-person startups these days—it's a gift. I remember building a todo app as my first SaaS project, and choosing something called Stormpath for authentication. It subsequently shut down, forcing me to do a last-minute migration from a hostel in Japan using Nitrous Cloud IDE (which also shut down). Just pain upon pain.[1] Now, you can just pick a full-stack cloud service and run with it.... - Source: Hacker News / 11 months ago
Simplescraper — Trigger your webhook after each operation. The free plan includes 100 cloud scrape credits. - Source: dev.to / about 1 year ago
I run Simplescraper (https://simplescraper.io). Started in 2020 and it's now profitable. > Have any recent trends affected your business? Not really. People like data as much as ever. As a one-person biz, the main dilemma remains how to juggle development, marketing and support. Reaching a point where the price of context-switching to customer support is becoming a little too high. But that's easily fixable and... - Source: Hacker News / about 2 years ago
> perhaps you can simply ask the API to create Python or JS code that is deterministic, instead. Had a conversation last week with a customer that did exactly that - spent 15 minutes in ChatGPT generating working Scrapy code. Neat to see people solve their own problem so easily but it doesn't yet erode our value. I run https://simplescraper.io and a lot of the value is in integrations, scale, proxies, scheduling,... - Source: Hacker News / about 2 years ago
import.io - Import.io helps its users find the internet data they need, organize and store it, and transform it into a format that provides them with the context they need.
Octoparse - Octoparse provides easy web scraping for anyone. Our advanced web crawler allows users to turn web pages into structured spreadsheets within a few clicks.
Scrapy - A fast and powerful scraping and web crawling framework.
Content Grabber - Content Grabber is an automated web scraping tool.
ParseHub - ParseHub is a free web scraping tool. With our advanced web scraper, extracting data is as easy as clicking the data you need.
Diggernaut - Web scraping just became easy. Extract any website's content and turn it into datasets. No programming skills required.