There are strange people who make links like this (kind of URL-encoded?):
[[Második világháború#Partrasz.C3.A1ll.C3.A1s Szic.C3.ADli.C3.A1ban
.28Huskey hadm.C5.B1velet.29|Huskey hadműveletben]]
So the section title must have been copied from the URL.
Do we have a ready tool to fix these?
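If not, the encoding is simply percent-encoding with '.' instead of '%', so a
small helper could decode such anchors. A rough sketch (Python 2, untested;
the function name is mine, and the pattern could in principle also match
ordinary text that happens to contain '.XX'):

import re
import urllib

def decode_anchor(anchor):
    # MediaWiki encodes section anchors like percent-encoding, but with
    # '.' instead of '%'. Turn '.C3.A1' back into '%C3%A1' and unquote it.
    percent = re.sub(r'\.([0-9A-Fa-f]{2})', r'%\1', anchor)
    return unicode(urllib.unquote(percent.encode('ascii')), 'utf-8')

print decode_anchor(u'Partrasz.C3.A1ll.C3.A1s Szic.C3.ADli.C3.A1ban')
# -> Partraszállás Szicíliában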
My old problem is that replace.py can't write the pages to work on into a
file on my disk. For years I have used a modified version that makes no
changes but writes the titles of the involved pages to a subpage on Wikipedia
in automated mode, and then I can make the replacements from that page much
more quickly than directly from the dump or the live Wikipedia. This is slow
and generates plenty of dummy edits.
In other words, replace.py has a way to get the titles from a file (-file)
or from a wiki page (-links), but has no way to generate this file.
Now I am ready to rewrite it. This way we can start the bot and it will find
all the possible articles to work on and save the titles without editing
Wikipedia (and without artificial delay); meanwhile we can have lunch,
run a marathon or sleep. Then we make the replacements from this file with -file.
My idea is that replace.py should have two new parameters:
-save writes the results into a new file instead of editing articles; it
overwrites an existing file without notice.
-saveappend writes into a file, or appends to an existing one.
Or rather:
-save writes and appends (primary mode)
-savenew writes and overwrites
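For illustration, the intended workflow would then be something like this
(the -save syntax is only my proposal; the dump name, file name and the
replacement texts are made up):

  python replace.py -xml:huwiki-dump.xml "oldtext" "newtext" -save:titles.txt
  python replace.py -file:titles.txt "oldtext" "newtext"

The first run only collects and saves the titles, the second one does the
actual edits from the saved list.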
The help is here:
So we have to import codecs.
My script is:
tutuzuzu = u'# %s\n' % page.aslink()  <-- needs rewrite to the new syntax
articles.write(unicode(tutuzuzu))  <-- needs further testing, whether unicode() is needed here
It works fine, except that '\n' is a Unix-style newline that has to be
converted by lfcr.py in order to make the file readable with notepad.exe.
This is with a constant filename; that should be developed to take the name
from the command line.
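A rough sketch of what I have in mind (the function name, the filename
handling and the use of '\r\n' instead of lfcr.py are my assumptions, not
tested code):

import codecs

def save_title(page, filename, append=True):
    # Append one page title to a UTF-8 file instead of editing the wiki.
    f = codecs.open(filename, 'a' if append else 'w', 'utf-8')
    try:
        # '\r\n' keeps the file readable in notepad.exe without lfcr.py.
        f.write(u'# %s\r\n' % page.aslink())
    finally:
        f.close()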
Your opinions before I begin?
I want to read a special page with Page.get(). The message is:
File "C:\Program Files\Pywikipedia\wikipedia.py", line 601, in get
raise NoPage('%s is in the Special namespace!' % self.aslink())
What is the solution?
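For context, this is roughly the failing pattern (the page title is only an
example of mine); since a Special page has no wikitext, get() refuses it by
design:

import wikipedia  # trunk framework

site = wikipedia.getSite()
page = wikipedia.Page(site, u'Special:WantedPages')  # example title
text = page.get()  # raises NoPage: the page is in the Special namespace
# Possible workaround (untested assumption): fetch the rendered HTML with
# site.getUrl() and parse what is needed out of it.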
I would like to get SVN access and some help to get started.
I need it mainly for adding and maintaining TOCbot, which is under
preparation (it has worked in huwiki for several months and is now being
Information about TOCbot:
A description, a user guide, a bot owners' guide and a collection of examples
are ready, as well as an auxiliary script, while the main script is not yet
public. It will soon be published for testing and may need much care in the
I would also like to take part in the maintenance of replace.py, on which I
have already worked a lot.
At the moment I am interested only in the trunk version.
My SF page: http://sourceforge.net/users/binbot/ -- I don't know how to list
all my contributions; a part of them since May 22 appears there, but there are
many more. I have also been active on the mailing list in the past years.
Please support and give me technical help to use the system.
On Mon, Apr 25, 2011 at 7:49 AM, Merlijn van Deen <valhallasw(a)arctus.nl> wrote:
> Whoo! Great work :-) Tests always are good contributions :-)
> On a sidenote - is there a reason you're implementing these in 'trunk' and
> not in 'rewrite'? Of course, these contributions are very welcome in the
> trunk, but I still think it would be good to push the rewrite branch.
I'm working off trunk because it is trunk.
I'd assumed that the rewrite branch was a single-purpose branch to
rewrite something, and that it would be merged back when it is stable.
Is it stable?
Is there any documentation on what the plans are for the rewrite branch?
Is there a roadmap to finish it?
I see now that the rewrite branch has more unit tests, but more are needed.
Is there a need to create a backwards compatibility layer?
Or, is everyone except me using the rewrite branch? ;-)
I'm running into a problem with article titles that include ":", e.g.
"Nomad: From Islam to America" (which exists on English Wikipedia:
http://en.wikipedia.org/wiki/Nomad:_From_Islam_to_America ). I haven't been
able to figure out whether there's a way to instantiate pages like this one,
either through Google or by reading the source code and trial & error. All I
end up with is slamming into an exception because there's no "Nomad" family.
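Roughly what I am trying, reduced to a minimal example (simplified; I am not
sure at which step the family lookup actually kicks in):

import wikipedia

site = wikipedia.getSite('en', 'wikipedia')
# A ':' inside the title -- does the framework try to interpret 'Nomad'
# as a family/interwiki prefix here, or does the problem come from elsewhere?
page = wikipedia.Page(site, u'Nomad: From Islam to America')
print page.get()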
Is there a simple and straightforward way to do this?
I am working on a script that will mass-copy pages from one wiki to another.
The main purpose is to fetch my favourite templates from huwiki to my own
wiki. I have found nothing like that in the framework; is it new, or have I
just not noticed it? In the first case I will publish it.
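Stripped down, the core is just a get() on the source wiki and a put() on the
target, something like this (the target family name, the example title and the
edit summary are only placeholders):

import wikipedia  # trunk framework

def copy_page(title, source_site, target_site):
    # Fetch the wikitext from the source wiki and save it to the target.
    text = wikipedia.Page(source_site, title).get()
    wikipedia.Page(target_site, title).put(text, comment=u'Copied from huwiki')

source = wikipedia.getSite('hu', 'wikipedia')
target = wikipedia.getSite('hu', 'mywiki')  # 'mywiki' is a placeholder family
copy_page(u'Sablon:Infobox', source, target)  # example template title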