Bugs item #2922193, was opened at 2009-12-28 11:56
Message generated for change (Comment added) made by masti01
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=603138&aid=2922193&group_…
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: interwiki
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: masti (masti01)
Assigned to: Nobody/Anonymous (nobody)
Summary: interwiki fails on pl.wikibooks
Initial Comment:
While running interwiki.py on pl.wikibooks, the script goes through the pages, but from time to time it stalls with a "Received incomplete XML data. Sleeping for 15 seconds..." error and stays like that forever.
example:
$python pywikipedia/interwiki.py -start:"H. K. T." -auto
NOTE: Number of pages queued is 0, trying to add 60 more.
Getting 60 pages from wikisource:pl...
[then some output about processing]
[and then ...]
======Post-processing [[pl:Ha! jeszcze o mnie...]]======
Updating links on page [[pl:Ha! jeszcze o mnie...]].
No changes needed
======Post-processing [[pl:H. K. T.]]======
Updating links on page [[pl:H. K. T.]].
No changes needed
NOTE: The first unfinished subject is [[pl:Had we never loved so kindly]]
NOTE: Number of pages queued is 10, trying to add 60 more.
Getting 60 pages from wikisource:pl...
Received incomplete XML data. Sleeping for 15 seconds...
Received incomplete XML data. Sleeping for 30 seconds...
Received incomplete XML data. Sleeping for 45 seconds...
Received incomplete XML data. Sleeping for 60 seconds...
Received incomplete XML data. Sleeping for 75 seconds...
Received incomplete XML data. Sleeping for 135 seconds...
Received incomplete XML data. Sleeping for 195 seconds...
Received incomplete XML data. Sleeping for 255 seconds...
Received incomplete XML data. Sleeping for 315 seconds...
and so on ...
python pywikipedia/version.py
Pywikipedia (r7830 (wikipedia.py), 2009/12/27, 14:20:21)
Python 2.6.2 (r262:71600, Aug 21 2009, 12:22:21)
[GCC 4.4.1 20090818 (Red Hat 4.4.1-6)]
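For context, the escalating delays above follow a simple incremental back-off: +15 s per retry up to 75 s, then +60 s per retry. A minimal sketch of such a loop (hypothetical, not the actual pywikipedia code; 'fetch' is an assumed callable returning the XML string, or None when it is incomplete):

import time

def fetch_with_backoff(fetch):
    # Retry until complete XML arrives, sleeping longer each time.
    wait = 15
    while True:
        data = fetch()
        if data is not None:
            return data
        print "Received incomplete XML data. Sleeping for %d seconds..." % wait
        time.sleep(wait)
        wait += 15 if wait < 75 else 60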
----------------------------------------------------------------------
Comment By: masti (masti01)
Date: 2009-12-29 23:07
Message:
I have split the page (which had been requested anyway), and with pages of
approx. 800 kB the bot works properly. So for this particular case the
workaround works, and I think we can close the issue, although it looks like
a MediaWiki problem, not a pywikibot problem. Thanks for pointing me to the real cause :)
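As a guard against hitting this again, oversized pages could be detected up front. A rough sketch using the MediaWiki API's prop=info (the site URL and the size threshold are illustrative assumptions):

import json, urllib

def page_too_large(title, limit=1000 * 1024):
    # Ask the API for the page length (in bytes) before trying to export it.
    params = urllib.urlencode({'action': 'query', 'prop': 'info',
                               'titles': title.encode('utf-8'),
                               'format': 'json'})
    reply = urllib.urlopen('http://pl.wikisource.org/w/api.php?' + params)
    pages = json.load(reply)['query']['pages']
    return any(p.get('length', 0) > limit for p in pages.values())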
----------------------------------------------------------------------
Comment By: xqt (xqt)
Date: 2009-12-29 22:36
Message:
Maybe, but I am not sure. I could try to get the requested pages via the API
instead of XML export, but there is some irregularity with the eo x-encoding.
That is the reason this is not implemented yet, apart from some developer
hacks. Perhaps it will run next year ;)
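For reference, fetching page text through api.php instead of the XML export could look roughly like this (a sketch against a standard MediaWiki API; the site URL is an assumption, and the eo x-encoding handling mentioned above is deliberately omitted):

import json, urllib

def get_text_via_api(title):
    # Fetch the latest revision text of one page through api.php.
    params = urllib.urlencode({'action': 'query', 'prop': 'revisions',
                               'rvprop': 'content',
                               'titles': title.encode('utf-8'),
                               'format': 'json'})
    reply = json.load(urllib.urlopen('http://pl.wikisource.org/w/api.php?' + params))
    page = reply['query']['pages'].values()[0]
    return page['revisions'][0]['*']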
----------------------------------------------------------------------
Comment By: masti (masti01)
Date: 2009-12-29 20:27
Message:
After exporting, I see that it ends just after the title and the first <id> tag:
<title>Historyja literatury angielskiej</title>
<id>14057</id>
One other thing: this page is huge, 1469 kB. Maybe this is a problem with
exporting?
----------------------------------------------------------------------
Comment By: xqt (xqt)
Date: 2009-12-29 19:55
Message:
You can try to export this page via [[s:pl:Specjalna:Eksport]]. That does not
work either. Maybe the page is corrupt. We should ask in the tech channel.
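A scripted version of that export test (a sketch; Specjalna:Eksport is the Polish name of Special:Export, and a complete dump ends with the closing </mediawiki> tag):

import urllib

def export_is_complete(title):
    # Fetch the XML export of a single page and report whether it is complete.
    url = ('http://pl.wikisource.org/wiki/Specjalna:Eksport/'
           + urllib.quote(title.encode('utf-8')))
    data = urllib.urlopen(url).read()
    return data.rstrip().endswith('</mediawiki>')

print export_is_complete('Historyja literatury angielskiej')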
----------------------------------------------------------------------
Comment By: masti (masti01)
Date: 2009-12-29 19:49
Message:
You're right; the output for this page is:
'\n <id>14057</id>\n'
----------------------------------------------------------------------
Comment By: xqt (xqt)
Date: 2009-12-29 19:35
Message:
I guess this is a problem with the page 'Historyja literatury angielskiej'.
You may try the statements below, but with
page=wikipedia.Page(site, 'Historyja literatury angielskiej')
it won't work. But I have no idea for now.
----------------------------------------------------------------------
Comment By: masti (masti01)
Date: 2009-12-29 15:51
Message:
Yes, it works:
[mst@pl37007 pywikipedia]$ python
Python 2.6.2 (r262:71600, Aug 21 2009, 12:22:21)
[GCC 4.4.1 20090818 (Red Hat 4.4.1-6)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import wikipedia
>>> site = wikipedia.getSite('pl','wikisource')
>>> ga = wikipedia._GetAll(site, pages=[], throttle=0, force=True)
>>> page=wikipedia.Page(site, 'H. K. T.')
>>> ga.pages = [page]
>>> data=ga.getData()
>>> data[-20:]
'/page>\n</mediawiki>\n'
----------------------------------------------------------------------
Comment By: xqt (xqt)
Date: 2009-12-29 09:16
Message:
Could you test the following statements in your Python shell (IDLE):
>>> import wikipedia
>>> site = wikipedia.getSite('pl', 'wikisource')
>>> ga = wikipedia._GetAll(site, pages=[], throttle=0, force=True)
>>> page=wikipedia.Page(site, 'H. K. T')
>>> ga.pages = [page]
>>> data=ga.getData()
>>> data[-20:]
As result you should see this:
'einfo>\n</mediawiki>\n'
>>>
----------------------------------------------------------------------
Comment By: masti (masti01)
Date: 2009-12-28 18:50
Message:
One thing was wrong: it's pl.wikisource, not wikibooks. This error has
persisted for some months already, and it happens every time I run the bot,
so it should not be due to server performance. I was running the bot on
several other projects today as well, and this happens only on pl.wikisource.
This is just an example; the bot stalls on several different articles.
I am using the API.
----------------------------------------------------------------------
Comment By: xqt (xqt)
Date: 2009-12-28 18:38
Message:
I found the server was very slow today for the pl sites. I had a similar
delay on pl-wiki. Maybe it will work later.
On the other hand, you can try via the API. Does that work?
----------------------------------------------------------------------
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=603138&aid=2922193&group_…
Support Requests item #2923020, was opened at 2009-12-29 21:17
Message generated for change (Comment added) made by keyril
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=603139&aid=2923020&group_…
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Install Problem
Group: None
Status: Open
Priority: 5
Private: No
Submitted By: Keyril (keyril)
Assigned to: Nobody/Anonymous (nobody)
Summary: Login problem
Initial Comment:
Sorry, but nothing I try works.
I just downloaded r7844 and tried login.py again, but it failed with new error messages.
It should be easy:
* locally installed MediaWiki 1.13.1
* Windows XP
* path: d:\xampp\htdocs\elysium
* wiki name: Elysium
* user-config.py:
family = 'elysium'
mylang = 'en'
usernames['elysium']['en'] = 'WikiBot'
* elysium_family.py:
# -*- coding: utf-8 -*-  # REQUIRED
import config, family, urllib  # REQUIRED

class Family(family.Family):  # REQUIRED
    def __init__(self):  # REQUIRED
        family.Family.__init__(self)  # REQUIRED
        self.name = 'elysium'  # REQUIRED; replace with actual name
        self.langs = {  # REQUIRED
            'en': '127.0.0.1/elysium',  # Include one line for each wiki in family
        }

    # IMPORTANT: if your wiki does not support the api.php interface,
    # you must uncomment the second line of this method:
    def apipath(self, code):
        raise NotImplementedError, "%s wiki family does not support api.php" % self.name
        return '%s/api.php' % self.scriptpath(code)

    # Which version of MediaWiki is used? REQUIRED
    def version(self, code):
        # Replace with the actual version being run on your wiki
        return '1.13.1'

    def code2encoding(self, code):
        """Return the encoding for a specific language wiki"""
        # Most wikis nowadays use UTF-8, but change this if yours uses
        # a different encoding
        return 'utf-8'

    #def path(self, code):
    #    return '/elysium'
----------------------------------------------------------------------
>Comment By: Keyril (keyril)
Date: 2009-12-29 21:24
Message:
Sorry, hit the button too quickly:
If I try the above version of elysium_family.py, I get an error message for
the URL 127.0.0.1/elysium/w/index.php...
How do I get rid of the "w"?
If I uncomment the lowest part with "def path", the path becomes
127.0.0.1/elysium/elysium/index.php...
If I set self.langs to 127.0.0.1 without "elysium", the path should be
correct, but the error message is "wrong password or CAPTCHA answer".
I really don't know how to get it working.
Thanks in advance.
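One likely fix, sketched under the assumption that index.php really lives directly under 127.0.0.1/elysium: keep self.langs as the bare host and override scriptpath (which the base Family class defaults to '/w') inside elysium_family.py. The remaining "wrong password or CAPTCHA answer" would then be a separate login issue.

class Family(family.Family):
    # ... __init__ as above, but with the bare host:
    #     self.langs = {'en': '127.0.0.1'}

    def scriptpath(self, code):
        # The base class defaults to '/w'; point it at the actual
        # script directory so URLs resolve to 127.0.0.1/elysium/index.php.
        return '/elysium'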
----------------------------------------------------------------------
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=603139&aid=2923020&group_…
Support Requests item #2920435, was opened at 2009-12-24 02:42
Message generated for change (Comment added) made by keyril
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=603139&aid=2920435&group_…
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Install Problem
Group: None
Status: Open
Priority: 5
Private: No
Submitted By: Nobody/Anonymous (nobody)
Assigned to: Nobody/Anonymous (nobody)
Summary: Cant login
Initial Comment:
I am running a private wiki that requires login to view pages and uses pretty URLs.
When I run login.py and enter the correct password, I am told "timed out
WARNING: Could not open 'http://sitename.com/wiki/index.php?title=Special:Userlogin&useskin=monobook…'. Maybe the server or
your connection is down. Retrying in 1 minutes...". I checked the Apache access log and it says "POST /wiki/index.php?title=Special:Userlogin&useskin=monobook&action=submit HTTP/1.1" 302 -
Any help is greatly appreciated.
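The 302 in the access log means the POST itself reached the server and was answered with a redirect, so the problem may be how that redirect is followed. A quick way to inspect it by hand (a sketch; hostname and path are taken from the log above):

import httplib

# Request the login page and print the status line plus any redirect target.
conn = httplib.HTTPConnection('sitename.com')
conn.request('GET', '/wiki/index.php?title=Special:Userlogin&useskin=monobook')
resp = conn.getresponse()
print resp.status, resp.reason      # 200 expected here; the POST returned 302
print resp.getheader('location')    # where a redirect points, if any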
----------------------------------------------------------------------
Comment By: Keyril (keyril)
Date: 2009-12-29 20:56
Message:
Hi! I've got the same error.
* I disabled the firewall and checked the status of the wiki, MySQL, and
Apache beforehand, then tried it and got the same error message (the URL was
different, of course).
* I tried the URL in Firefox and it worked (it opened the Special:Userlogin
page).
* I tried it on two different PCs - one with XAMPP, one with manually set up
Apache, PHP, and MySQL - but got the same error.
* I tried it with a sysop account, too, but got the same error.
Any help is greatly appreciated, too!
Thanks in advance!
----------------------------------------------------------------------
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=603139&aid=2920435&group_…
Bugs item #2922896, was opened at 2009-12-29 17:25
Message generated for change (Comment added) made by wikimercy
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=603138&aid=2922896&group_…
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: General
Group: None
>Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Mercy (wikimercy)
Assigned to: Nobody/Anonymous (nobody)
Summary: Bot corrupting characters on eo.wp
Initial Comment:
See http://en.wikipedia.org/w/index.php?title=User_talk:Mercy&diff=334694634&ol… for more information.
An example of a wrong edit can be found here: http://eo.wikipedia.org/w/index.php?title=Maigret&diff=2662069&oldid=2500605
This happened when I used the feature.py script to add Link_FA templates to articles.
I'm using Python 2.6.3.
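For background (an assumption about the likely mechanism, not confirmed in this report): eo.wikipedia traditionally handles Esperanto diacritics via the x-convention, and a faulty round-trip of that conversion would produce exactly this kind of character corruption. A minimal sketch of the convention itself:

# Each circumflexed Esperanto letter is written as the base letter + 'x'.
X_TABLE = [(u'\u0109', u'cx'), (u'\u011d', u'gx'), (u'\u0125', u'hx'),
           (u'\u0135', u'jx'), (u'\u015d', u'sx'), (u'\u016d', u'ux')]

def encode_x(text):
    for letter, xform in X_TABLE:
        text = text.replace(letter, xform).replace(letter.upper(), xform.upper())
    return text

def decode_x(text):
    # Naive inverse: a real conversion must also escape a literal 'x'
    # that happens to follow one of these letters, which this sketch ignores.
    for letter, xform in X_TABLE:
        text = text.replace(xform, letter).replace(xform.upper(), letter.upper())
    return text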
----------------------------------------------------------------------
>Comment By: Mercy (wikimercy)
Date: 2009-12-29 19:15
Message:
Hi, my current SVN revision is nr. 7844 and I'm using the API.
----------------------------------------------------------------------
Comment By: xqt (xqt)
Date: 2009-12-29 19:03
Message:
I need further information:
- the actual release of pywikipediabot you are using
- whether use_api=True is set in your user-config.py
----------------------------------------------------------------------
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=603138&aid=2922896&group_…