Efficiency
Scrapy is designed to be efficient and robust: it handles many requests concurrently and can crawl large websites quickly and reliably.
Built-in Tooling
Scrapy comes with built-in tools for handling common tasks such as following links, extracting data using XPath and CSS, and exporting data in a variety of formats.
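As a sketch of those built-in selectors and link-following helpers, a minimal spider might look like the following; quotes.toscrape.com is a public scraping sandbox used here only as an assumed example target.

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["https://quotes.toscrape.com"]

        def parse(self, response):
            # CSS and XPath selectors are both built in and interchangeable.
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.xpath(".//small[@class='author']/text()").get(),
                }
            # Built-in link following: queue the next page if one exists.
            next_page = response.css("li.next a::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)

Exporting relies on the built-in feed exports, e.g. scrapy runspider quotes_spider.py -O quotes.json writes the yielded items out as JSON.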
Customization
Scrapy offers extensive customization options, allowing users to build complex spiders and modify their behavior through middleware and pipelines.
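As a sketch of the pipeline hook, the snippet below drops items that are missing a field; the "price" field and the project path mentioned afterwards are hypothetical.

    from scrapy.exceptions import DropItem

    class RequirePricePipeline:
        # Called once per scraped item; returning the item passes it on,
        # raising DropItem discards it.
        def process_item(self, item, spider):
            if not item.get("price"):
                raise DropItem("missing price")
            return item

Enable it in settings.py with ITEM_PIPELINES = {"myproject.pipelines.RequirePricePipeline": 300}; pipelines with lower numbers run earlier.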
Python Integration
Being a Python framework, Scrapy integrates seamlessly with the Python ecosystem, enabling the use of libraries like Pandas, NumPy, and others to process and analyze scraped data.
Community Support
Scrapy has a large and active community, providing extensive documentation, tutorials, and third-party extensions to enhance functionality.
Asynchronous Processing
Scrapy’s asynchronous processing model enhances performance by allowing multiple concurrent requests, reducing the time required for crawling sites.
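Concurrency is tuned through ordinary settings; the values below are illustrative, not recommendations.

    # settings.py (sketch)
    CONCURRENT_REQUESTS = 32             # total in-flight requests (default 16)
    CONCURRENT_REQUESTS_PER_DOMAIN = 8   # per-domain cap (this is the default)
    DOWNLOAD_DELAY = 0.25                # polite delay between requests to one domain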
One might ask, what about Scrapy? I'll be honest: I don't really keep up with their updates. But I haven't heard about Zyte doing anything to bypass TLS fingerprinting. So out of the box Scrapy will also be blocked, but nothing is stopping you from using curl_cffi in your Scrapy Spider. - Source: dev.to / 18 days ago
Install scrapy (official website) either using pip or conda (follow the linked guide for detailed instructions). - Source: dev.to / about 2 months ago
Using Scrapy I fetched the data needed (activities and attendance). Scrapy handled authentication using a form request in a very simple way. - Source: dev.to / 3 months ago
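The comment's own snippet isn't reproduced in the quote, but a form login in Scrapy typically goes through FormRequest.from_response; the URL and field names below are assumptions for illustration.

    import scrapy

    class AttendanceSpider(scrapy.Spider):
        name = "attendance"
        start_urls = ["https://example.com/login"]  # hypothetical login page

        def parse(self, response):
            # from_response() pre-fills hidden inputs such as CSRF tokens.
            yield scrapy.FormRequest.from_response(
                response,
                formdata={"username": "user", "password": "pass"},
                callback=self.after_login,
            )

        def after_login(self, response):
            # The session is authenticated from here on; crawl the
            # activities and attendance pages in this callback.
            ...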
Scrapy is an open-source Python-based web scraping framework that extracts data from websites. With Scrapy, you create spiders, which are autonomous scripts to download and process web content. The limitation of Scrapy is that it does not work very well with JavaScript rendered websites, as it was designed for static HTML pages. We will do a comparison later in the article about this. - Source: dev.to / 4 months ago
While there is no specific library for SERP, there are some web scraping libraries that can do the Google Search Page Ranking. One of them which is quite famous is Scrapy - It is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It offers rich developer community support and is used by 50+ projects. - Source: dev.to / 9 months ago
If you're looking for a turn-key solution, I'd have to dig a little. I generally write a scraper in python that dumps into a database or flat file (depending on the number of records I'm hunting). Scraping is a separate subject, but once you write one you can generally reuse relevant portions for many others. If you can get adept at a scraping framework like Scrapy you can do it fairly quickly, but there aren't many... - Source: Hacker News / about 1 year ago
I know this might not be a good answer, as it's not .NET, but we use https://scrapy.org/ (Python). Source: about 1 year ago
Take a look at Scrapy. It has a fairly advanced throttling mechanism for you to not get banned. Source: over 1 year ago
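The throttling mechanism referred to is Scrapy's AutoThrottle extension, which is enabled through settings; the values shown below are the documented defaults.

    AUTOTHROTTLE_ENABLED = True
    AUTOTHROTTLE_START_DELAY = 5           # initial download delay, in seconds
    AUTOTHROTTLE_MAX_DELAY = 60            # ceiling for backoff under high latency
    AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0  # average parallel requests per remote site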
Not only Windows; you can use it on Mac and Linux too. But for Python and CLI, you can use scrapy. Source: over 1 year ago
The first step in automating Amazon price tracking with Python is to scrape the product pages of Amazon.com for the desired product. To do this, you can use a web scraping library like BeautifulSoup or Scrapy. In the following example, we will use BeautifulSoup to scrape the product page for a MacBook Pro on Amazon.com. - Source: dev.to / over 1 year ago
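The example itself is not included in the quote; a hedged sketch of that BeautifulSoup approach follows. Amazon actively blocks simple clients and changes its markup often, so the URL, the header, and the a-price-whole class are assumptions for illustration only.

    import requests
    from bs4 import BeautifulSoup

    url = "https://www.amazon.com/dp/EXAMPLE"   # placeholder product page
    headers = {"User-Agent": "Mozilla/5.0"}     # the default UA is usually rejected

    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, "html.parser")
    price = soup.find("span", class_="a-price-whole")  # assumed price element
    print(price.get_text(strip=True) if price else "price not found")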
Lots of good suggestions here -- wanted to suggest the python tool, https://scrapy.org. Source: over 1 year ago
Scrapy – one of the most comprehensive web scraping frameworks available for Python developers. Scrapy was designed around speed, reliability and extensibility – allowing users to quickly extract data from websites with minimal effort thanks to its powerful spiders that automatically traverse whole sites page by page until all relevant information has been scraped. Source: over 1 year ago
Not to go full "Dropbox in a weekend", but if you're technical enough to self-host, this is something you can build for yourself. Everyone is going straight to embeddings, but it'd be easy enough to use old school NLP summarization from NLTK (https://www.nltk.org/). Hook that up to a web scraping library like https://scrapy.org/ and get a summary of each page. Then embed a site map in your system prompt and use... - Source: Hacker News / over 1 year ago
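NLTK ships no single summarize() call, so the "old school" approach the comment alludes to is usually a frequency-scored extractive summary built from NLTK primitives; the sketch below makes that assumption, and the sentence count is arbitrary.

    import nltk
    from nltk.corpus import stopwords
    from nltk.tokenize import sent_tokenize, word_tokenize

    nltk.download("punkt", quiet=True)      # tokenizer models
    nltk.download("stopwords", quiet=True)  # newer NLTK may also need "punkt_tab"

    def summarize(text, n_sentences=3):
        stop = set(stopwords.words("english"))
        words = [w.lower() for w in word_tokenize(text)
                 if w.isalpha() and w.lower() not in stop]
        freq = nltk.FreqDist(words)
        # Rank sentences by the summed frequency of the words they contain.
        ranked = sorted(
            sent_tokenize(text),
            key=lambda s: sum(freq[w.lower()] for w in word_tokenize(s)),
            reverse=True,
        )
        return " ".join(ranked[:n_sentences])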
In general celery tasks should be idempotent if possible, for scraping consider if Scrapy might not be more appropriate, it already implements a lot of the rate limiting/retrying you have to replicate in celery yourself. But regarding locking you are right to consider databases/redis since celery workers might run on entirely different machines even. In the case of a paginated scrape with celery, you could... Source: over 1 year ago
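For the locking part, a common cross-machine pattern is an atomic SET NX lock in Redis via redis-py; the key scheme and timeout below are hypothetical.

    import redis

    r = redis.Redis()

    def scrape_page_once(page_url):
        lock_key = f"scrape-lock:{page_url}"  # hypothetical key scheme
        # SET NX EX is atomic: only one worker, on any machine, wins the lock.
        if r.set(lock_key, "1", nx=True, ex=300):
            try:
                ...  # fetch and process the page (keep this idempotent)
            finally:
                r.delete(lock_key)
        # Losing workers simply skip: another worker already owns this page.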
You can use automation tools like Selenium or Playwright. You can work with a full-fledged framework such as Scrapy. I also recently discovered selectolax with its Lexbor backend, a Python tool which allows you to extract data very quickly. Source: over 1 year ago
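A small sketch of selectolax with the Lexbor backend the comment mentions; the HTML snippet is made up for illustration.

    from selectolax.lexbor import LexborHTMLParser

    html = "<div><p class='title'>Hello</p><p>ignored</p></div>"
    tree = LexborHTMLParser(html)
    for node in tree.css("p.title"):   # CSS selectors over a very fast parser
        print(node.text())             # -> Hello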
This is not related to https://scrapy.org/ and so not related to this subreddit either. Source: over 1 year ago
The sha256 is there to establish the uniqueness of the file. It isn't great for capturing whether you have already seen the file before, though, because it is rather expensive to calculate (imagine your csv file were gigabytes in size; you would have to stream the whole file down just to see if it had changed!). In the past I have used a sha256 of information that the server hosting the file gives me about... Source: over 1 year ago
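The truncated comment is pointing at hashing server-provided metadata rather than the file body; below is a hedged sketch, assuming the server sends useful ETag/Last-Modified/Content-Length headers (availability varies by server).

    import hashlib
    import requests

    def remote_fingerprint(url):
        # One cheap HEAD request instead of streaming gigabytes of body.
        head = requests.head(url, allow_redirects=True)
        meta = "|".join(head.headers.get(h, "")
                        for h in ("ETag", "Last-Modified", "Content-Length"))
        return hashlib.sha256(meta.encode()).hexdigest()

    # Re-download only when the fingerprint differs from the one stored last run.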
You may want to check out estela (https://estela.bitmaker.la/docs/), which is a spider management solution, developed by Bitmaker (https://bitmaker.la), that allows you to run Scrapy (https://scrapy.org) spiders. Source: over 1 year ago
Hi, in this case the data is in the html itself (no data.json). You can use this xpath to get the data: //div[@id="vue-match-centre"]/@q-data. There are many ways to get this info; the one my company uses is the scrapy framework. Here is some code that uses scrapy to get this data into a json file. Source: over 1 year ago
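The code referenced in that comment isn't included in the quote; a hedged reconstruction follows, with a placeholder URL since the actual page isn't named, and assuming the q-data attribute holds a JSON string.

    import json
    import scrapy

    class MatchCentreSpider(scrapy.Spider):
        name = "match_centre"
        start_urls = ["https://example.com/match-centre"]  # placeholder URL

        def parse(self, response):
            # The page embeds its data as a JSON string in the q-data attribute.
            raw = response.xpath('//div[@id="vue-match-centre"]/@q-data').get()
            if raw:
                yield json.loads(raw)

Running it with scrapy runspider match_centre.py -O data.json writes the parsed payload to a JSON file.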
I recommend using Scrapy as that is what we use at my place of work, Bitmaker (bitmaker.la). An example spider would look like this. Source: over 1 year ago
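Again the quoted snippet itself is missing; a generic stand-in for the skeleton such a spider would have:

    import scrapy

    class ExampleSpider(scrapy.Spider):
        name = "example"
        start_urls = ["https://example.com"]  # placeholder

        def parse(self, response):
            # Yield one dict per extracted record.
            yield {"title": response.css("title::text").get()}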
In the past I used stuff like beautifulsoup for webscraping but I’ve heard good things about https://scrapy.org/. Source: over 1 year ago
This is an informative page about Scrapy. You can review and discuss the product here. The primary details have not been verified within the last quarter, and they might be outdated. If you think we are missing something, please use the means on this page to comment or suggest changes. All reviews and comments are highly encouraged and appreciated, as they help everyone in the community make an informed choice. Please always be kind and objective when evaluating a product and sharing your opinion.