From 5616b920d7b03fd941f58a165731ce6d8899bd53 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emilio=20J=2E=20Rodr=C3=ADguez-Posada?=
Date: Sun, 13 Jul 2014 13:03:46 +0200
Subject: [PATCH] updating instructions for using just the domain name

---
 README.md | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 871368e..e74b7df 100644
--- a/README.md
+++ b/README.md
@@ -39,11 +39,15 @@ or, if you don't have enough permissions for the above,
 
 To download any wiki, use one of the following options:
 
-`python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --xml --images` (complete XML histories and images)
+`python dumpgenerator.py http://wiki.domain.org --xml --images` (complete XML histories and images)
 
-`python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --xml` (complete XML histories)
+If the script can't find the API and/or index.php paths by itself, you can provide them:
 
-`python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --xml --curonly` (only current version of every page)
+`python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --xml --images`
+
+`python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --index=http://wiki.domain.org/w/index.php --xml --images`
+
+If you only want the XML histories, use `--xml`. For only the images, just `--images`. For only the current version of every page, `--xml --curonly`.
 
 You can resume an aborted download:
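
As a rough illustration of the flag combinations the new README paragraph mentions, the invocations would look like the following (a sketch only, reusing the README's placeholder host `wiki.domain.org` and the bare-domain style introduced by this patch; an actual wiki URL goes in its place):

`python dumpgenerator.py http://wiki.domain.org --xml` (complete XML histories only)

`python dumpgenerator.py http://wiki.domain.org --images` (images only)

`python dumpgenerator.py http://wiki.domain.org --xml --curonly` (only the current version of every page)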