There are strange people who make links like this (the section anchor looks kind of URL-encoded):
[[Második világháború#Partrasz.C3.A1ll.C3.A1s Szic.C3.ADli.C3.A1ban
.28Huskey hadm.C5.B1velet.29|Huskey hadműveletben]]
So the section title must have been copied from the URL.
Do we have a ready tool to fix these?
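For reference, these anchors are MediaWiki's old dot-encoded section links: ordinary percent-encoding with '%' replaced by '.'. A minimal sketch of a decoder (the function name is mine, this is not an existing pywikipedia tool):

```python
# Sketch: decode MediaWiki's dot-encoded section anchors (".C3.A1" style).
# The scheme is percent-encoding with '%' replaced by '.', so we reverse
# the substitution and percent-decode. A literal '.XX' hex pair in a real
# title would be wrongly decoded; good enough for a sketch.
import re
try:
    from urllib.parse import unquote  # Python 3 (decodes UTF-8 bytes)
except ImportError:
    from urllib import unquote  # Python 2

def decode_anchor(anchor):
    percent = re.sub(r'\.([0-9A-F]{2})', r'%\1', anchor)
    return unquote(percent)
```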
My old problem is that replace.py can't write the pages to work on into a
file on my disk. For years I have used a modified version that makes no
changes but writes the titles of the involved pages to a subpage on Wikipedia
in automated mode; then I can make the replacements from that page much
more quickly than directly from a dump or the live Wikipedia. This is slow and
generates plenty of dummy edits.
In other words, replace.py has a tool to get the titles from a file (-file)
or from a wikipage (-links), but no tool to generate such a file.
Now I am ready to rewrite it. This way we can start it, and the bot will find
all the possible articles to work on and save the titles without editing
Wikipedia (and without artificial delay); meanwhile we can have lunch,
run a marathon or sleep. Then we make the replacements from this file with -file.
My idea is that replace.py should have two new parameters:
-save writes the results into a new file instead of editing articles. It
overwrites an existing file without notice.
-saveappend writes into a file, or appends to an existing one.
Revised naming:
-save writes and appends (primary mode)
-savenew writes and overwrites
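The proposed flags could be parsed like this (a sketch; '-save:' and '-savenew:' follow the trunk convention of '-flag:value' arguments, but these options do not exist in replace.py yet):

```python
# Sketch of parsing the proposed options. Returns the target filename and
# whether to append; neither flag given means no file output at all.
def parse_save_args(args):
    filename, append = None, True
    for arg in args:
        if arg.startswith('-save:'):
            filename, append = arg[len('-save:'):], True   # primary mode
        elif arg.startswith('-savenew:'):
            filename, append = arg[len('-savenew:'):], False  # overwrite
    return filename, append
```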
The help is here:
So we have to import codecs.
My script is:
tutuzuzu = u'# %s\n' % page.aslink()  <-- needs rewrite to the new syntax
articles.write(unicode(tutuzuzu))  <-- needs further testing, to see if unicode() is needed
It works fine, except that '\n' is a Unix-style newline that has to be converted
by lfcr.py in order to make the file readable with notepad.exe.
This is with a constant filename, which should be developed to take the name from the command line.
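A cleaned-up sketch of the idea (the function name and signature are mine; writing DOS-style '\r\n' line endings directly avoids the lfcr.py pass):

```python
# Sketch: dump page titles to a UTF-8 file instead of editing Wikipedia.
# Assumes trunk-era Page objects that provide an aslink() method.
import codecs

def save_titles(pages, filename, append=True):
    mode = 'a' if append else 'w'
    with codecs.open(filename, mode, encoding='utf-8') as articles:
        for page in pages:
            # '\r\n' keeps the file readable in notepad.exe without lfcr.py
            articles.write(u'# %s\r\n' % page.aslink())
```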
Your opinions before I begin?
I want to read a special page with Page.get(). The message is:
File "C:\Program Files\Pywikipedia\wikipedia.py", line 601, in get
raise NoPage('%s is in the Special namespace!' % self.aslink())
What is the solution?
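One workaround is to guard the call (a sketch, assuming the standard MediaWiki namespace numbering where Special is -1), since Special pages are generated on the fly and carry no wikitext to fetch:

```python
# Sketch: avoid the NoPage exception raised by wikipedia.py by checking
# for the Special namespace before fetching wikitext.
def get_text_or_none(page):
    if page.namespace() == -1:  # Special namespace has no wikitext
        return None
    return page.get()
```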
I would like to get an SVN access and some help to start.
I need it mainly for inserting and maintaining TOCbot that is under
preparation (it has worked in huwiki for several months and is now being
Information about TOCbot:
Description, user guide, bot owners' guide and a collection of examples
are ready, as well as an auxiliary script, while the main script is not yet
public. It will soon be published for testing and may need much care in the
I would also like to take part in the maintenance of replace.py, on which I
have already worked a lot.
At the moment I am interested only in trunk version.
My SF page: http://sourceforge.net/users/binbot/ -- I don't know how to list
all my contributions; a part of them since May 22 appears here, but there
are many more. I have also been active on the mailing list in the past years.
Please support and give me technical help to use the system.
On Mon, Apr 25, 2011 at 7:49 AM, Merlijn van Deen <valhallasw(a)arctus.nl> wrote:
> Whoo! Great work :-) Tests always are good contributions :-)
> On a sidenote - is there a reason you're implementing these in 'trunk' and
> not in 'rewrite'? Of course, these contributions are very welcome in the
> trunk, but I still think it would be good to push the rewrite branch.
I'm working off trunk because it is trunk.
I'd assumed that the rewrite branch was a single-purpose branch to
rewrite something, and that it would be merged back when it is stable.
Is it stable?
Is there any documentation on what the plans are for the rewrite branch?
Is there a roadmap to finish it?
I see now that the rewrite branch has more unit tests, but more are needed.
Is there a need to create a backwards compatibility layer?
Or, is everyone except me using the rewrite branch? ;-)
*As several people have mentioned they had trouble starting with the rewrite
branch, I decided to do a step-by-step log of installing the rewrite in a
way that is good for developing -- this means you are able to edit the
framework files, while not inflicting any changes on other users (or other
bots you run!) of the system. By using setup.py develop, edits you make to
the framework will immediately be used (no need to setup.py install them),
but only within the virtualenv.*
*This is the Windows version of my earlier email*
I do not run Python on Windows, so this is a tutorial that starts with
installing Python. It's a bit rougher than the Unix one, as I did not want
to spend too much time on it.
1. Install python 2.7
(do *not* use the 64-bit version, due to http://bugs.python.org/issue6792)
2. Install Setuptools
3. Install Virtualenv
4. create a virtualenv for pwb
New python executable in pywikibot\Scripts\python.exe
5. Go to C:\Users\valhallasw\pywikibot and use tortoisesvn to get the
6. create a shortcut to cmd /k
with working path C:\Users\valhallasw\pywikibot\rewrite
7. Use the shortcut. You now have a new cmd.exe window
8. python setup.py develop
Your default user directory is
How to proceed? ([K]eep [c]hange)
change, to c:\users\valhallasw\pywikibot\conf\
Answer 'y' to the warning prompt (not 'yes')
Do you want to copy files: y
[note: I copied my unix user-config.py to c:\users\valhallasw\pywikibot]
Path to existing wikipedia.py? C:\Users\valhallasw\pywikibot
NOTE: user-config.py already exists in the directory
Create user-fixes.py file? ([y]es, [N]o) n
(pywikibot) C:\Users\valhallasw\pywikibot\rewrite>echo SET
(DON'T put a space between f and >>!)
Close the window, and
9. Use the shortcut from (7) again
You should now have a cmd.exe with a working pywikibot setup!
(pywikibot) C:\Users\valhallasw\pywikibot\rewrite\scripts>python touch.py
Retrieving 1 pages from wikipedia:nl.
Page [[Gebruiker:Valhallasw]] saved
NOTE: you *must* use 'python' in front of the script name, or python will
not find the pywikibot directory.
Hi Ariel and Andre,
On Fri, Sep 30, 2011 at 9:39 AM, Ariel T. Glenn <ariel(a)wikimedia.org>wrote:
> Out of curiosity... If the new revisions of one of these badly edited
> pages are deleted, leaving the top revision as the one just before the
> bad iw bot edit, does a rerun of the bot on the page fail?
On Fri, Sep 30, 2011 at 11:13 AM, Andre Engels <andreengels(a)gmail.com> wrote:
> I deleted the page [[nl:Blankenbach]], then restored the 2 versions before
> the problematic bot edit. When now I look at the page, instead of the page
> content I get:
Using this undeleted version and running interwiki.py gives the following:
valhallasw@dorthonion:~/src/pywikipedia/trunk$ python interwiki.py
NOTE: Number of pages queued is 0, trying to add 60 more.
Getting 1 pages from wikipedia:nl...
WARNING: Family file wikipedia contains version number 1.17wmf1, but
it should be 1.18wmf1
NOTE: [[nl:Blankenbach]] does not exist. Skipping.
This also happens for running it from dewiki (python interwiki.py
-lang:de -page:Blankenbach%20%28Begriffskl%C3%A4rung%29) or running as
'full-auto' bot (python interwiki.py -all -async -cleanup -log -auto
Special:Export acts like the page just does not exist (it shows page Blanzac
but not Blankenbach).
api.php also more or less does the expected thing:
- that is, unless you supply rvlimit=1:
However, none of them seem to return an empty page -- and playing
around with pywikipediabot does not allow me to get an empty page.
Depending on settings, the result is either the content of the edit page
(page.get(), use_api=False / screen scraping), a
pywikibot.exceptions.NoPage exception (PreloadingGenerator /
wikipedia.getall, which uses Special:Export), or the correct page text.
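For reference, the api.php query discussed above can be built like this (a sketch; per the MediaWiki revisions API, rvlimit=1 restricts the reply to the newest revision, which is what changed the behaviour):

```python
# Sketch: build the api.php revisions query for a page. Passing rvlimit=1
# limits the response to the top revision only; the nl.wikipedia.org
# endpoint is just the example from this thread.
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode  # Python 2

def revisions_query(title, rvlimit=None):
    params = [('action', 'query'), ('prop', 'revisions'),
              ('titles', title), ('format', 'json')]
    if rvlimit is not None:
        params.append(('rvlimit', str(rvlimit)))
    return 'https://nl.wikipedia.org/w/api.php?' + urlencode(params)
```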
Anyway, thanks a huge heap for trying this (and for everyone, for
thinking about it). Unfortunately, I won't have much time this weekend
to debug -- hopefully some other pwb developer has.
Best regards, and thanks again,
On 30 September 2011 11:12, Max Semenik <maxsem.wiki(a)gmail.com> wrote:
> So you screen-scrape? No surprise it breaks. Why? For example, due to
> protocol-relative URLs. Or some other changes to HTML output. Why not just
> use API?
No, most of pywikipedia has been adapted to the API and/or
Special:Export, which, imo, is just an 'old' MediaWiki API. Keep in
mind interwiki.py is old (2003!), and pywikipedia initially was an
extension of the interwiki bot. Thus, there could very well be some
seldom-used code that still uses screen scraping. And
actually, in practice, screen scraping worked pretty well.