Is the Common Crawl index not being used by search engines? Could someone chime in as to its relative obscurity in many such articles? https://commoncrawl.org/. - Source: Hacker News / about 1 month ago
Not sure why http://commoncrawl.org/ wasn't mentioned. - Source: Hacker News / 21 days ago
Yes, you would use crawlers to download the photos/videos, then locally create a database of their feature vectors. You might want to take a look at Common Crawl, an open repository of crawl data covering billions of web pages. You can download its database of URLs and then crawl those URLs for photos/videos. - Source: Reddit / 6 days ago
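The workflow that comment describes can be sketched in Python. This is a minimal, illustrative sketch, assuming the public Common Crawl CDX index API at index.commoncrawl.org; the crawl ID "CC-MAIN-2024-10" is an example placeholder (current IDs are listed at https://index.commoncrawl.org/collinfo.json), and `extract_image_urls` is a hypothetical helper for pulling image links out of a fetched page.

```python
# Sketch: list archived URLs for a domain via the Common Crawl CDX index,
# then pull <img> URLs out of downloaded pages for later feature extraction.
import urllib.parse
from html.parser import HTMLParser

def build_index_query(domain: str, crawl: str = "CC-MAIN-2024-10") -> str:
    """Build a CDX index query URL listing archived pages under a domain.

    The crawl ID is an assumption for illustration; pick a real one from
    https://index.commoncrawl.org/collinfo.json before fetching.
    """
    params = urllib.parse.urlencode({"url": f"{domain}/*", "output": "json"})
    return f"https://index.commoncrawl.org/{crawl}-index?{params}"

class _ImageSrcParser(HTMLParser):
    """Collect the src attribute of every <img> tag in an HTML page."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

def extract_image_urls(html: str) -> list:
    """Return the image URLs referenced by a page (hypothetical helper)."""
    parser = _ImageSrcParser()
    parser.feed(html)
    return parser.srcs
```

Fetching each query result and each page (e.g. with `urllib.request.urlopen`) and computing feature vectors from the downloaded images are left out; this only shows how the index lookup and image-URL extraction steps fit together.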
Apache Nutch - Apache Nutch is a highly extensible and scalable open source web crawler software project.
Scrapy - A fast and powerful scraping and web crawling framework.
StormCrawler - StormCrawler is an open source SDK for building distributed web crawlers with Apache Storm.
Apify - Apify is a web scraping and automation platform that can turn any website into an API.
Heritrix - Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project.
DuckDuckGo - The Internet privacy company that empowers you to seamlessly take control of your personal information online, without any tradeoffs.