clean_sandbox.py would do the job.
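If you ever want to do the same thing by hand, resetting a page to fixed text is only a few lines with the framework. A minimal sketch using the compat framework's wikipedia module (the page title and template text below are just placeholders for your own):

    import wikipedia  # core module of the compat framework

    site = wikipedia.getSite()
    page = wikipedia.Page(site, 'Project:Sandbox')  # your sandbox page
    # Overwrite whatever is on the page with the predefined template.
    page.put('{{sandbox heading}}', comment='Bot: cleaning sandbox')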
xqt
----- Original Message ---- From: Tom Hutchison tjhutchy96@optonline.net To: info@gno.de Date: 01.09.2010 18:08 Subject: Re: Re: [Pywikipedia-l] Fwd: Re: Need some help sorting out a couple of issues
Thanks so much for your help.
One of the main things I am trying to do is create a bot to rake the sandboxes of the wikis I admin. Is there a command to erase the page and reset it completely to predetermined text, which I have set up as a template?
Tom
----- Original Message ----- From: info@gno.de To: tjhutchy96@optonline.net Cc: pywikipedia-l@lists.wikimedia.org Sent: Wednesday, September 01, 2010 11:08 AM Subject: Re: Re: [Pywikipedia-l] Fwd: Re: Need some help sorting out a couple of issues
:)
btw: you may also use the rewrite branch of pywikibot. That framework always uses the API, and most of the needed scripts have been merged into this branch.
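A minimal sketch of the same page round-trip on the rewrite branch (details may differ between checkouts, so treat the exact calls as an assumption):

    import pywikibot

    site = pywikibot.Site()
    page = pywikibot.Page(site, 'Project:Sandbox')
    text = page.get()  # fetched through the API
    page.put(text, 'Bot: test edit via the rewrite branch')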
greetings
xqt
http://toolserver.org/~pywikipedia/nightly/
----- Original Message ---- From: Tom Hutchison tjhutchy96@optonline.net To: Pywikipedia discussion list pywikipedia-l@lists.wikimedia.org Date: 01.09.2010 16:25 Subject: Re: [Pywikipedia-l] Fwd: Re: Need some help sorting out a couple of issues
Genius!
As soon as I saw in your email that it was calling Special:Export, I knew what the problem was. I thought it was calling the API to export pages.

I had a permission block on export for all but the sysop class. I added 'bot' to the array and it is now working!
Tom

Sent from my iPhone
On Sep 1, 2010, at 6:08 AM, info@gno.de wrote:
Your bot tries to receive some pages through Special:Export and expects a </mediawiki> tag at the bottom of the returned page. The method where this message occurred is _GetAll.run() of the _GetAll class.

First check whether there is a special_page_limit defined in your user config. Try decreasing this value: start with something low like special_page_limit = 20 (it should be divisible by 4), check whether the error occurs again, and increase it gradually if it works. You may also try to debug this part by reading the received data in the IDLE editor.

There is an alternative that reads these pages via the API instead of this special page. You must use the -debug option to enable it, but be aware that it is under construction, and I've found that a lot of page titles given to the API are missing from the returned data. This caused me to deactivate the feature again for now.
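For reference, that setting is a plain assignment in your user-config.py:

    # user-config.py
    # Pages fetched per Special:Export request; keep it divisible
    # by 4 and increase it gradually once the error is gone.
    special_page_limit = 20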
xqt
----- Original Message ---- From: Tom Hutchison tjhutchy96@optonline.net To: Pywikipedia discussion list pywikipedia-l@lists.wikimedia.org Date: 01.09.2010 08:46 Subject: Re: [Pywikipedia-l] Need some help sorting out a couple of issues
Anyone have any ideas on why I am getting the "Received incomplete XML data" error message?

I ran some of the API links manually in my browser, and all the created XML looks just fine. Can someone at least explain what triggers the incomplete XML data error? What part of pywikipedia reads the XML data? Is it not being called correctly? Is my server the issue? I am running PHP 5 in Advanced mode.
I would appreciate any ideas or suggestions.
Thanks,
Tom