* In a previous commit I had noticed that maxretries was not respected in
getXMLPageCore but, for some reason, I didn't fix it. Done now (see the retry
sketch below).
* If the "Special" namespace alias doesn't work, fetch the local one.
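A minimal sketch of the namespace fallback, using the standard MediaWiki
siteinfo query; the requests call here stands in for the script's own HTTP
handling.

    import requests

    def getLocalSpecialName(api_url):
        # Ask the wiki for its localized namespace names; 'Special' is namespace -1.
        r = requests.get(api_url, params={
            'action': 'query',
            'meta': 'siteinfo',
            'siprop': 'namespaces',
            'format': 'json',
        })
        return r.json()['query']['namespaces']['-1']['*']  # e.g. 'Especial' on a Spanish wiki

And a minimal sketch of the retry fix, assuming a config dict with a
'retries' entry and a hypothetical doPostRequest() helper; the real
getXMLPageCore() takes more parameters and distinguishes more error cases.

    import time

    def getXMLPageCore(params, config):
        maxretries = config['retries']    # previously read but never actually enforced
        retries = 0
        while retries < maxretries:       # check the counter on every attempt
            try:
                return doPostRequest(config['index'], params)  # hypothetical HTTP helper
            except Exception:
                retries += 1
                time.sleep(10 * retries)  # back off a little longer after each failure
        return ''                         # callers treat an empty string as a failed fetch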
My previous code broke the page-missing detection, with two negative
outcomes:
- missing pages were not reported in the error log
- every missing page generated an extraneous "</page>" line in the output,
which rendered the dumps invalid
This patch improves the exception code in general and fixes both of these
issues.
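Roughly, the new flow looks like the sketch below; getXMLPage() and
logerror() are stand-ins for the real functions in dumpgenerator.py, whose
signatures differ.

    class PageMissingError(Exception):
        """Raised when the wiki reports that an exported page does not exist."""
        def __init__(self, title):
            self.title = title

    def writePages(titles, xmlfile, getXMLPage, logerror):
        for title in titles:
            try:
                xml = getXMLPage(title)
            except PageMissingError as e:
                # Record the miss in the error log and write nothing, so the
                # dump no longer picks up a stray </page> line.
                logerror(u'The page "%s" was missing in the wiki (probably deleted)' % e.title)
                continue
            xmlfile.write(xml)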
Bitrot seems to have gotten the best of this script, and it sounds like it
hasn't been used. This at least gets it working by:
- finding both the .gz and the .7z dumps
- parsing the new date format in the HTML
- looking for dumps in the correct place
- moving all chatter to stderr instead of stdout (see the sketch below)
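For the last two points, a minimal sketch; the exact date string is an
assumption about the new HTML, not a quote from it.

    from __future__ import print_function
    import sys
    from datetime import datetime

    def chatter(msg):
        # Progress messages go to stderr so stdout stays a clean list of results
        print(msg, file=sys.stderr)

    def parse_dump_date(text):
        # e.g. "23 January 2015" (assumed format of the date shown on the page)
        return datetime.strptime(text, '%d %B %Y')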
The test is failing: https://travis-ci.org/WikiTeam/wikiteam/builds/50102997#L546
It might be our fault, but they just updated their code:
Tyrian – (f313f23) 12:47, 23 January 2015 GPLv3+ Gentoo's new web theme ported to MediaWiki. Alex Legler
I don't think testing screen scraping against a theme used only by Gentoo makes much sense for us.
The Wikia API is exporting sha1 sums as part of the response for pages.
These make the XML invalid and cause dump-parsing code (e.g.,
MediaWiki-Utilities) to fail. Also, sha1 is a property of revisions, not
pages, so it's not entirely clear to me what this value refers to.
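A minimal sketch of the workaround, assuming the stray elements appear as
plain <sha1>...</sha1> (or empty <sha1 />) children of <page>; the actual
cleanup in the script may differ.

    import re

    def strip_page_sha1(page_xml):
        # Drop sha1 elements that Wikia's export places directly under <page>,
        # where the schema does not allow them.
        return re.sub(r'\s*<sha1\s*/>|\s*<sha1>[^<]*</sha1>', '', page_xml)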
The program tended to run out of memory when processing very large pages (i.e.,
pages with extremely large numbers of revisions or pages with large numbers of
very large revisions). This mitigates the problem by changing getXMLPage() into
a generator, which allows us to write pages to disk after each request to the API.
This required changes to the getXMLPage() function itself and to the other
parts of the code that call it.
Additionally, when the function was called, its text was checked in several
ways. This required a few changes, including keeping a running tally of
revisions instead of a post hoc check, and moving the error checking into an
Exception rather than an if statement on the final result.
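A condensed sketch of the generator approach; fetchChunk() and
lastTimestamp() are hypothetical stand-ins for the API request and offset
handling, and the real getXMLPage() deals with many more edge cases.

    class PageMissingError(Exception):
        pass

    def getXMLPage(title, fetchChunk, lastTimestamp, limit=1000):
        # fetchChunk(title, offset, limit) is assumed to return up to `limit`
        # <revision> elements as an XML fragment, or '' if nothing is left;
        # lastTimestamp(xml) gives the offset at which the next request resumes.
        numberofedits = 0            # running tally instead of a post hoc check
        offset = '1'                 # start from the beginning of the history
        while True:
            xml = fetchChunk(title, offset, limit)
            if not xml:
                if numberofedits == 0:
                    raise PageMissingError(title)  # error path is an exception now
                break                # history was an exact multiple of `limit`
            yield xml                # the caller writes this chunk before the next request
            edits = xml.count('<revision>')
            numberofedits += edits
            if edits < limit:
                break                # fewer revisions than requested: history exhausted
            offset = lastTimestamp(xml)

    # Callers stream the output instead of holding the whole page in memory:
    #     for chunk in getXMLPage(title, fetchChunk, lastTimestamp):
    #         dumpfile.write(chunk)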
The list of unarchived wikis was compared to the list of wikis that we
managed to download with dumpgenerator.py:
https://archive.org/details/wikia_dump_20141219
To allow the comparison, the naming format was aligned to the format
used by dumpgenerator.py for 7z files.