WikiTeam
We archive wikis, from Wikipedia to the tiniest wikis
WikiTeam software is a set of tools for archiving wikis. They work on MediaWiki wikis, but we want to expand support to other wiki engines. As of June 2014, WikiTeam has preserved more than 13,000 stand-alone wikis, several wikifarms, regular Wikipedia dumps, and 24 TB of Wikimedia Commons images.
There are thousands of wikis on the Internet. Every day some of them stop being publicly available and, for lack of backups, are lost forever. Millions of people download tons of media files (movies, music, books, etc.) from the Internet, serving as a kind of distributed backup. Wikis, most of them under free licenses, disappear from time to time because nobody grabbed a copy of them. That is a shame that we would like to fix.
WikiTeam is the Archive Team (GitHub) subcommittee on wikis. It was founded and originally developed by Emilio J. Rodríguez-Posada, a Wikipedia veteran editor and amateur archivist. Many people have helped by sending suggestions, reporting bugs, writing documentation, helping on the mailing list and making wiki backups. Thanks to all, especially to: Federico Leva, Alex Buie, Scott Boyd, Hydriz, Platonides, Ian McEwen, Mike Dupont and balrog.
Quick guide
This is a very quick guide to the most-used features of the WikiTeam tools. For further information, read the tutorial and the rest of the documentation. You can also ask on the mailing list.
Download any wiki
To download any wiki, first confirm that you satisfy the requirements:
pip install --upgrade -r requirements.txt
or, if you don't have enough permissions for the above,
pip install --user --upgrade -r requirements.txt
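Before launching a full dump, it can help to confirm that the wiki's api.php endpoint actually responds. The snippet below is a minimal, hypothetical check using only Python's standard library; the URL is just an example and the script is not part of the WikiTeam tools.
# Minimal sketch: verify that a MediaWiki api.php endpoint responds
# before starting a dump. The URL below is only an example.
import json
import urllib.request

API = "http://wiki.domain.org/w/api.php"  # replace with the target wiki's api.php
PARAMS = "?action=query&meta=siteinfo&siprop=general&format=json"

with urllib.request.urlopen(API + PARAMS, timeout=30) as resp:
    general = json.load(resp)["query"]["general"]

print(general["sitename"], general["generator"])  # e.g. "MyWiki MediaWiki 1.23"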
Then use one of the following options:
python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --xml --images
(complete XML histories and images)
python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --xml
(complete XML histories)
python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --xml --curonly
(only current version of every page)
You can resume an aborted download:
python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --xml --images --resume --path=/path/to/incomplete-dump
See more options:
python dumpgenerator.py --help
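Once a dump finishes, a quick sanity check is to count the pages and revisions in the generated XML file. The following is a rough sketch, not part of the WikiTeam tools, and the file name is only a placeholder (real dumps are named after the wiki and the date).
# Rough sketch: count <page> and <revision> elements in a finished XML dump.
# The file name is a placeholder; adjust it to the dump you generated.
import xml.etree.ElementTree as ET

DUMP = "wikidomainorg-20140601-history.xml"  # placeholder name
pages = revisions = 0
for _, elem in ET.iterparse(DUMP):
    tag = elem.tag.rsplit("}", 1)[-1]  # strip the MediaWiki export namespace
    if tag == "revision":
        revisions += 1
    elif tag == "page":
        pages += 1
        elem.clear()  # free memory on large dumps
print(pages, "pages,", revisions, "revisions")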
Download Wikimedia dumps
To download Wikimedia XML dumps (Wikipedia, Wikibooks, Wikinews, etc.):
python wikipediadownloader.py
(download all projects)
See more options:
python wikipediadownloader.py --help
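If you only need a single project rather than everything, you can also fetch one dump straight from dumps.wikimedia.org. The sketch below assumes the usual latest/pages-articles layout on that server and uses the small "simplewiki" database as an example; it is an illustration, not part of the WikiTeam tools.
# Minimal sketch: fetch the latest pages-articles dump for a single project
# directly from dumps.wikimedia.org (here "simplewiki" as an example).
import shutil
import urllib.request

wiki = "simplewiki"  # database name of the project to fetch
url = ("https://dumps.wikimedia.org/%s/latest/%s-latest-pages-articles.xml.bz2"
       % (wiki, wiki))

with urllib.request.urlopen(url) as resp, \
        open(wiki + "-latest-pages-articles.xml.bz2", "wb") as out:
    shutil.copyfileobj(resp, out)  # stream to disk instead of loading into memory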
Download Wikimedia Commons images
There is a script for this task, but we have already uploaded the tarballs to the Internet Archive, so it is more useful to reseed their torrents than to re-generate old tarballs with the script.