ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view sites you want to preserve offline.
You can set it up as a command-line tool, web app, and desktop app (alpha), on Linux, macOS, and Windows.
You can feed it URLs one at a time, or schedule regular imports from browser bookmarks or history, feeds like RSS, bookmark services like Pocket/Pinboard, and more. See input formats for a full list.
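For example, a typical command-line setup looks something like the sketch below (flags and defaults can vary between versions; `archivebox help` shows what your install supports):

```shell
# create a new collection in an empty directory
mkdir ~/archivebox && cd ~/archivebox
archivebox init

# archive a single URL
archivebox add 'https://example.com'

# check the feed once a day and archive the pages it links to
archivebox schedule --every=day --depth=1 'https://example.com/feed.rss'
```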
It saves snapshots of the URLs you feed it in several formats: HTML, PDF, PNG screenshots, WARC, and more out-of-the-box, with a wide variety of content extracted and preserved automatically (article text, audio/video, git repos, etc.). See output formats for a full list.
The goal is to sleep soundly knowing the part of the internet you care about will be automatically preserved in durable, easily accessible formats for decades after it goes down.
ArchiveBox's answer:
ArchiveBox aims to enable more of the internet to be saved from deterioration by empowering people to self-host their own archives. The intent is for all the web content you care about to be viewable with common software in 50 - 100 years without needing to run ArchiveBox or other specialized software to replay it.
Vast treasure troves of knowledge are lost every day on the internet to link rot. As a society, we have an imperative to preserve some important parts of that treasure, just like we preserve our books, paintings, and music in physical libraries long after the originals go out of print or fade into obscurity.
Whether it's to resist censorship by saving articles before they get taken down or edited, or just to save a collection of early-2010s Flash games you love to play, having the tools to archive internet content enables you to save the stuff you care most about before it disappears.
Image from "WTF is Link Rot?"

The balance between permanence and the ephemeral nature of content on the internet is part of what makes it beautiful. I don't think everything should be preserved in an automated fashion, making all content permanent and never removable, but I do think people should be able to decide for themselves and effectively archive the specific content they care about.
Because modern websites are complicated and often rely on dynamic content, ArchiveBox archives the sites in several different formats beyond what public archiving services like Archive.org/Archive.is save. Using multiple methods and the market-dominant browser to execute JS ensures we can save even the most complex, finicky websites in at least a few high-quality, long-term data formats.
ArchiveBox's answer:
ArchiveBox differentiates itself from similar self-hosted projects by providing both a comprehensive CLI interface for managing your archive, a Web UI that can be used either independently or together with the CLI, and a simple on-disk data format that can be used without either.
ArchiveBox is neither the highest fidelity nor the simplest tool available for self-hosted archiving, rather it's a jack-of-all-trades that tries to do most things well by default. It can be as simple or advanced as you want, and is designed to do everything out-of-the-box but be tuned to suit your needs.
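To illustrate the on-disk data format, a snapshot directory typically looks roughly like this (the names are indicative, and the exact set of files depends on which extractors are enabled):

```
archive/
└── 1672531200.0/          # one folder per snapshot, named by timestamp
    ├── index.html         # self-contained index page for this snapshot
    ├── singlefile.html    # page saved as a single HTML file
    ├── output.pdf         # PDF printout
    ├── screenshot.png     # full-page screenshot
    ├── warc/              # raw WARC capture
    └── media/             # extracted audio/video
```

Because each snapshot is plain files in ordinary formats, a browser or PDF viewer can open them directly, with no ArchiveBox install required.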
If you want better fidelity for very complex interactive pages with heavy JS/streams/API requests, check out ArchiveWeb.page and ReplayWeb.page.
If you want more bookmark categorization and note-taking features, check out Archivy, Memex, Polar, or LinkAce.
If you need more advanced recursive spidering/crawling beyond --depth=1, check out Browsertrix, Photon, or Scrapy and pipe the resulting URLs into ArchiveBox.
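If you do feed ArchiveBox from an external crawler, it helps to normalize and de-duplicate the URL list first so near-identical URLs don't produce redundant snapshots. A minimal sketch using only the Python standard library (the surrounding pipeline and the normalization rules are assumptions, not part of ArchiveBox itself):

```python
from urllib.parse import urlsplit, urlunsplit


def normalize(url: str) -> str:
    """Lowercase the scheme and host and drop the #fragment so
    trivially different URLs collapse to a single entry."""
    parts = urlsplit(url.strip())
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       parts.path or "/", parts.query, ""))


def dedupe(urls):
    """Yield each normalized URL once, preserving first-seen order."""
    seen = set()
    for url in urls:
        norm = normalize(url)
        if norm and norm not in seen:
            seen.add(norm)
            yield norm
```

A crawler that writes URLs to stdout could then be chained in the usual Unix way, e.g. `my-crawler | python dedupe_urls.py | archivebox add` (the script and crawler names here are illustrative).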
Based on our records, ArchiveBox appears to be much more popular than HTTrack: we have tracked 89 links to ArchiveBox but only 2 mentions of HTTrack. We track product recommendations and mentions on various public social media platforms and blogs; these can help you gauge which product is more popular and what people think of it.
I use httrack.com to download my websites. I am aware of the following software: Ubooquity, TrueNAS, Kavita, Plex, MediaMonkey, Jellyfin, and MusicBrainz Picard. Source: over 2 years ago
- I do get the entire text through RSS successfully.
- Turns out InoReader caches everything and will never delete it, which is why it has content fetched so far back.
- I tried httrack.com without success.
- I'm thinking about someone coding something to download each feed post as a PDF, which InoReader offers as a download option for single items.
Source: over 3 years ago
2. Drop the link into my instance of ArchiveBox [0] and I will return to it a few weeks/months later or, more often than not, never again. [0] https://archivebox.io/. - Source: Hacker News / 4 months ago
Is anyone using ArchiveBox regularly? It's a self-hosted archiving solution. Not the ambitious decentralized system I think this comment is thinking of but a practical way for someone to run an archive for themselves. https://archivebox.io/. - Source: Hacker News / 6 months ago
I used to depend solely on the Wayback Machine to automate archiving pages; now I am archiving webpages using the Selenium Python package on https://archive.ph/ and https://ghostarchive.org/. This taught me not to depend on third parties. Might self-host https://archivebox.io/. - Source: Hacker News / 7 months ago
https://archivebox.io/ for those with an interest. My del.icio.us collection (what was still online) lives on my NAS now. - Source: Hacker News / 7 months ago
And one nice tool for scraping archives for yourself is https://archivebox.io/ a nice frontend by https://news.ycombinator.com/user?id=nikisweeting. - Source: Hacker News / 11 months ago
WebCopy - Cyotek WebCopy is a free tool for copying full or partial websites locally onto your hard disk for offline viewing.
wallabag - Save the web, freely.
SiteSucker - SiteSucker is a Macintosh application that automatically downloads Web sites from the Internet.
Raindrop.io - All your articles, photos, video & content from web & apps in one place.
GNU Wget - GNU Wget is a free software package for retrieving files using HTTP(S) and FTP, the most...
Archive.org - Internet Archive is a non-profit digital library offering free universal access to books, movies...