So this is the way? If it works, I will use your idea. :-))))
Bugs item #3060262, was opened at 2010-09-06 11:25
> Message generated for change (Settings changed) made by wikimercy
>
> >Priority: 9
>
> ignoreTemplates = {
> 'sv' : [u'commonscatbox'],
>
--
Bináris
Hi Russell,
the main reason I have not joined the rewrite branch yet is that I did not get it running. I get an ImportError for simplejson, and I have no idea how to set PYTHONPATH while working in IDLE. The trunk, by contrast, is easy to use: install Python, download the bot, unpack it, and run it. That is the usability I would expect.
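For what it's worth, the simplejson ImportError was common on interpreters older than Python 2.6, where the stdlib json module does not exist. A usual workaround (a generic sketch, not something taken from the pywikipedia code) is to fall back from one module to the other:

```python
# Prefer the stdlib json module (available since Python 2.6); fall back
# to the third-party simplejson package on older interpreters.
try:
    import json
except ImportError:
    import simplejson as json

# Either module exposes the same loads/dumps interface.
data = json.loads('{"family": "wikipedia", "lang": "en"}')
print(data["family"])
```

Since both modules share the same interface, the rest of the code does not need to care which one was imported.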
Most of the scripts are out of date, since they are modified in trunk but not brought up to date in rewrite. I guess both forks will have to be developed in parallel for a while until all the (main) scripts are merged. I would like to support the rewrite development, but since I cannot test that stuff, I have not.
However, I have reservations about cutting off development for older MediaWiki versions.
Regards
----- Original Message ----
From: Russell Blau <russblau(a)imapmail.org>
To: Pywikipedia discussion list <pywikipedia-l(a)lists.wikimedia.org>
Date: 30.03.2010 16:18
Subject: [Pywikipedia-l] Request for feedback on rewrite branch
> I am at a point where it would be helpful to have some feedback from other
> Pywikipedia users about the future of the rewrite branch. As those who
> watch the SVN commits know, I have not had as much time to work on this
> lately, and have to prioritize what time I do spend on it.
>
> For those who have used the rewrite branch, what (if anything) needs to be
> done to it to get you to use it exclusively and retire the old wikipedia.py
> system? What is missing? What is broken? What is present but could be
> improved?
>
> For those who have chosen not to use the rewrite branch, why not? What
> might lead you to take another look?
>
> And then, I'm sure there are many whose reaction to this post has been,
> "What's the rewrite branch?" I don't know what to ask you, so feel free to
> move on to the next message.
>
> Most critically, is there any reason to continue development of the trunk
> once the rewrite branch is at a point where most users are ready to switch
> to it?
>
> -- Russ
>
>
> _______________________________________________
> Pywikipedia-l mailing list
> Pywikipedia-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/pywikipedia-l
>
clean_sandbox.py would do the job.
xqt
----- Original Message ----
From: Tom Hutchison <tjhutchy96(a)optonline.net>
To: info(a)gno.de
Date: 01.09.2010 18:08
Subject: Re: Re: [Pywikipedia-l] W.: Aw: Re: Need some help sorting out a
couple of issues
> Thanks so much for your help.
>
> One of the main things I am trying to do is create a bot to rake the
> sandboxes of the wikis I admin. Is there a command to erase and reset the
> page completely to predetermined text, which I set up as a template?
>
> Tom
>
> ----- Original Message -----
> From: <info(a)gno.de>
> To: <tjhutchy96(a)optonline.net>
> Cc: <pywikipedia-l(a)lists.wikimedia.org>
> Sent: Wednesday, September 01, 2010 11:08 AM
> Subject: Aw: Re: [Pywikipedia-l] W.: Aw: Re: Need some help sorting out a
> couple of issues
>
>
> :)
>
> btw: you may also use the rewrite branch of the pywikibot. This framework
> always uses the API, and the most-needed scripts have been merged into this branch.
>
> greetings
>
> xqt
>
> http://toolserver.org/~pywikipedia/nightly/
>
>
> ----- Original Message ----
> From: Tom Hutchison <tjhutchy96(a)optonline.net>
> To: Pywikipedia discussion list <pywikipedia-l(a)lists.wikimedia.org>
> Date: 01.09.2010 16:25
> Subject: Re: [Pywikipedia-l] W.: Aw: Re: Need some help sorting out a
> couple of issues
>
> > Genius!
> >
> > As soon as I saw it was calling Special:Export in your email, I knew what
> > the problem was. I thought it was calling the API to export pages.
> >
> > I had a permission block in export to all but sysop class. Added 'bot' to
> > the array and it is now working!
> >
> > Tom
> > Sent from my iPhone
> >
> > On Sep 1, 2010, at 6:08 AM, info(a)gno.de wrote:
> >
> > > Your bot tries to retrieve some pages through Special:Export and expects
> > > a </mediawiki> tag at the bottom of the returned page. The method that
> > > raises this message is _GetAll.run() of the _GetAll class. First check
> > > whether a special_page_limit is defined in your user config. Try
> > > decreasing this value; start with a low value like special_page_limit =
> > > 20 (it should be divisible by 4) and check whether the error occurs
> > > again (and increase it gradually if it works). You may also debug this
> > > part by reading the received data in the IDLE editor. There is an
> > > alternative that reads these pages via the API instead of this special
> > > page. You must use the -debug option to enable it, but be aware that it
> > > is under construction; I have found that many page titles given to the
> > > API are missing from the returned data, which caused me to deactivate
> > > this feature again for now.
> > >
> > > xqt
> > >
> > > ----- Original Message ----
> > > From: Tom Hutchison <tjhutchy96(a)optonline.net>
> > > To: Pywikipedia discussion list <pywikipedia-l(a)lists.wikimedia.org>
> > > Date: 01.09.2010 08:46
> > > Subject: Re: [Pywikipedia-l] Need some help sorting out a couple of issues
> > >
> > >> Anyone have any ideas on why I am getting the "Received incomplete XML
> > >> data" error message?
> > >>
> > >> I ran some of the API links manually in my browser, and all the
> > >> generated XML looks just fine. Can someone at least explain what
> > >> triggers the incomplete XML data error? What part of pywikipedia reads
> > >> the XML data? Is it not being called correctly? Is my server the issue?
> > >> I am running PHP 5 in Advanced mode.
> > >>
> > >> I would appreciate any ideas or suggestions.
> > >>
> > >> Thanks
> > >> Tom
> > >>
> > >>
> > >>
> > >> --------------------------------
> > >>
> > >>
> > >
> >
>
>
Your bot tries to retrieve some pages through Special:Export and expects a </mediawiki> tag at the bottom of the returned page. The method that raises this message is _GetAll.run() of the _GetAll class.

First check whether a special_page_limit is defined in your user config. Try decreasing this value; start with a low value like special_page_limit = 20 (it should be divisible by 4) and check whether the error occurs again (and increase it gradually if it works). You may also debug this part by reading the received data in the IDLE editor.

There is an alternative that reads these pages via the API instead of this special page. You must use the -debug option to enable it, but be aware that it is under construction; I have found that many page titles given to the API are missing from the returned data, which caused me to deactivate this feature again for now.
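The completeness check described above can be sketched as a stand-alone function. This is purely illustrative, not the actual _GetAll code; it only encodes the fact, stated in the thread, that a complete Special:Export response ends with a closing </mediawiki> tag, plus a hypothetical backoff helper for special_page_limit:

```python
def export_is_complete(xml_text):
    """Return True when a Special:Export response looks complete.

    Special:Export wraps everything in a single <mediawiki> element, so a
    truncated response is missing the closing </mediawiki> tag -- which is
    what the "Received incomplete XML data" message reports.
    """
    return xml_text.rstrip().endswith("</mediawiki>")


def reduced_limit(limit):
    """Back off the page-batch size after an incomplete response.

    Halves the limit while keeping it divisible by 4, as the thread
    suggests, and never goes below 4. (A hypothetical helper, named here
    only for illustration.)
    """
    return max(4, (limit // 2 // 4) * 4)
```

With this, a fetch loop could retry with reduced_limit(limit) whenever export_is_complete() returns False, and grow the limit again once responses come back whole.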
xqt
----- Original Message ----
From: Tom Hutchison <tjhutchy96(a)optonline.net>
To: Pywikipedia discussion list <pywikipedia-l(a)lists.wikimedia.org>
Date: 01.09.2010 08:46
Subject: Re: [Pywikipedia-l] Need some help sorting out a couple of issues
> Anyone have any ideas on why I am getting the "Received incomplete XML data"
> error message?
>
> I ran some of the API links manually in my browser, and all the generated
> XML looks just fine. Can someone at least explain what triggers the
> incomplete XML data error? What part of pywikipedia reads the XML data? Is
> it not being called correctly? Is my server the issue? I am running PHP 5
> in Advanced mode.
>
> I would appreciate any ideas or suggestions.
>
> Thanks
> Tom
>
>
>
> --------------------------------
>
>
I need some help sorting out a couple of issues with pywikipedia.
Everything is on an XP machine. Python 2.7, with the latest build of
pywikipedia from 8/30/2010. From all appearances everything is correct.
I can log in and run some scripts without a problem. I ran catall.py
without any issues and changed categories on pages. However.....!
For some reason I keep getting this error on many of the scripts
(category.py, weblinkchecker.py, interwiki.py, etc.):
"Received incomplete XML data" - then the sleep count starts.
I am at a loss as to what is happening. I can even edit with the GUI
interface without a problem.
Can someone help me? What am I missing?
Tom