There are strange people who make links like this (kind of URL-encoded?):
[[Második világháború#Partrasz.C3.A1ll.C3.A1s Szic.C3.ADli.C3.A1ban
.28Huskey hadm.C5.B1velet.29|Huskey hadműveletben]]
So the section title must have been copied from the URL.
Do we have a ready tool to fix these?
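For reference, those ".C3.A1"-style escapes are MediaWiki's dot-encoding of the UTF-8 anchor bytes (percent-encoding with "." instead of "%"). A minimal decoding sketch (the helper name is my own invention, and note that a legitimate dot followed by two hex digits would also be converted):

```python
import re
import urllib.parse

def decode_section_anchor(anchor):
    """Decode a MediaWiki section anchor where non-ASCII bytes and
    punctuation appear as '.XX' hex escapes instead of '%XX'."""
    # Turn the dot-escapes into percent-escapes, then unquote as UTF-8.
    percent = re.sub(r'\.([0-9A-F]{2})', r'%\1', anchor)
    return urllib.parse.unquote(percent)

# 'Partrasz.C3.A1ll.C3.A1s' -> 'Partraszállás'
# '.28Huskey hadm.C5.B1velet.29' -> '(Huskey hadművelet)'
```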
From one of my assignments as a bot operator I have some code which
does template parsing and general text parsing (e.g. Image/File tags).
It does not use regex and is thus able to correctly parse nested
templates and other such nasty things. I have written those as library
classes, with tests that cover almost all of the code.
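To illustrate why this matters (this is a sketch of the general approach, not the actual library code): matching nested {{...}} pairs requires tracking brace depth, which a single regex cannot express.

```python
def extract_templates(text):
    """Return the top-level {{...}} template spans in wikitext,
    handling nesting by counting brace depth."""
    templates = []
    depth = 0
    start = 0
    i = 0
    while i < len(text) - 1:
        if text[i:i + 2] == '{{':
            if depth == 0:
                start = i           # remember where the outer template opens
            depth += 1
            i += 2
        elif text[i:i + 2] == '}}' and depth > 0:
            depth -= 1
            if depth == 0:
                templates.append(text[start:i + 2])
            i += 2
        else:
            i += 1
    return templates

# extract_templates('{{a|{{b}}}} and {{c}}')
# keeps '{{b}}' nested inside '{{a|...}}' instead of cutting it short.
```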
I would now really like to contribute that code back to the community.
Would you be interested in adding this code to the pywikibot
framework? If yes, can I send the code to someone for code review or
how do you usually operate?
PS: wiki userpage is http://en.wikipedia.org/wiki/User:Hannes_R%C3%B6st
I wanted to participate in GSoC 2016 with Wikimedia and found a possible
project about moving catimages.py from pywikibot-contrib to pywikibot-core.
You can find the issue on Phabricator at
I spoke to the author of catimages (@DrTrigon) and he mentioned that he was
willing to be a mentor for such a project. He also raised concern as to
whether the community actually wants this. As the script is quite dated,
he mentioned that some things may need to be changed on the server side of
Commons for it to work correctly.
I wanted to check whether there is still demand for this transition or
whether the issue should be closed. Is this bot still of use to Commons?
I have created a patch for the site.search function, as it often
returned too many results which did not include the word I searched
for. This patch should solve the problem on the bot side, as I couldn't
quite figure out whether there is a solution on the API side. Also, I
didn't quite know where to post the patches, so I made a GitHub pull
request. I hope you can have a look at the patches and maybe direct me
to a better place to post them if you are interested.
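For illustration only (this is not the actual patch): the bot-side fix amounts to discarding results that do not really contain the search term. A hypothetical helper over (title, text) pairs:

```python
import re

def filter_containing(results, term):
    """Keep only (title, text) pairs whose text actually contains the
    whole search term as a word, case-insensitively (client-side filter)."""
    pattern = re.compile(r'\b%s\b' % re.escape(term), re.IGNORECASE)
    return [(title, text) for title, text in results if pattern.search(text)]

# filter_containing([('A', 'the Huskey operation'), ('B', 'no match here')],
#                   'huskey')
# keeps only page 'A'.
```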
There are 305 open change sets in pywikibot/core, which makes us the second
project after mediawiki/core in number of open change sets. Open change
sets won't do any harm, but sometimes it's hard to find good patches in a
pool of out-of-date or no-longer-needed ones.
Some projects have rules like "if a change set has had -2 for more than N
weeks, it can be abandoned"; that could work for us too. Also, I think if we
look into our own old patches we might find several that are not needed
anymore for various reasons, such as having been implemented another way,
too big a schema change in pywikibot, etc. We can abandon them. Please check
your own old patches.
I was just thinking about renaming articles in the same way we change text.
I use replace.py for correcting spelling errors. They may appear in
titles as well. Is there a way to correct misspelled titles using the
flexibility and robustness of replace.py and fixes.py?
I can use them to search for such articles without renaming them, if I
replace a space with anything and add the regex to the 'require-title'
exception list, but that only allows one regex (the listed expressions are
in an AND relation), which is not really convenient.
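To make the setup concrete, here is a hedged user-fixes.py sketch following replace.py's exceptions convention; the fix name, typo pair, and title regex are made up for illustration:

```python
# In a real user-fixes.py, the `fixes` dict is provided by pywikibot when
# the file is loaded; it is defined here so the fragment is self-contained.
fixes = {}

fixes['titletypos'] = {
    'regex': True,
    'msg': {'en': 'Robot: fixing common typos'},
    'replacements': [
        (r'\bteh\b', 'the'),            # hypothetical typo pair
    ],
    'exceptions': {
        # Only process pages whose title matches this regex.
        'require-title': [r'.*teh.*'],
    },
}
```

Run with e.g. `python pwb.py replace -fix:titletypos ...`; the limitation described above is that 'require-title' entries are combined with AND, so a single fix cannot target several unrelated title patterns.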