Happy Monday,
There are strange people who make links like this (kind of URL-encoded?):
[[Második világháború#Partrasz.C3.A1ll.C3.A1s Szic.C3.ADli.C3.A1ban .28Huskey hadm.C5.B1velet.29|Huskey hadműveletben]]
So the section title must have been copied from the URL.
Do we have a ready tool to fix these?
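For reference, those anchors are MediaWiki's legacy section encoding: percent-encoded UTF-8 with '.' in place of '%'. A minimal decoder sketch (the function name is my own, not an existing Pywikibot tool):

```python
import re
import urllib.parse

def decode_anchor(anchor):
    """Decode a MediaWiki legacy section anchor such as 'hadm.C5.B1velet'.

    These anchors are percent-encoded UTF-8 with '.' standing in for '%';
    turning each '.XX' hex pair back into '%XX' lets urllib decode them.
    (A literal '.' followed by two uppercase hex digits would be wrongly
    converted, so a real fixer would need more care.)
    """
    return urllib.parse.unquote(re.sub(r'\.([0-9A-F]{2})', r'%\1', anchor))

# decode_anchor('Partrasz.C3.A1ll.C3.A1s') -> 'Partraszállás'
```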
--
Bináris
Hello all
From one of my assignments as a bot operator, I have some code which
does template parsing and general text parsing (e.g. Image/File tags).
It is not using regex and is thus able to correctly parse nested
templates and other such nasty things. I have written those as library
classes and written tests for them which cover almost all of the code.
I would now really like to contribute that code back to the community.
Would you be interested in adding this code to the Pywikibot
framework? If yes, can I send the code to someone for code review, or
how do you usually operate?
Greetings
Hannes
PS: wiki userpage is http://en.wikipedia.org/wiki/User:Hannes_R%C3%B6st
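For context, the usual reason regex falls short here is nesting: a pattern cannot match balanced {{ }} pairs, but a depth-counting scan can. A minimal sketch of that technique (my own illustration, not Hannes's library code):

```python
def find_templates(text):
    """Return the top-level {{...}} template spans in wikitext.

    Counts brace depth so that nested templates stay inside their
    enclosing span instead of terminating it early.
    """
    spans, depth, start = [], 0, None
    i = 0
    while i < len(text) - 1:
        pair = text[i:i + 2]
        if pair == '{{':
            if depth == 0:
                start = i
            depth += 1
            i += 2
        elif pair == '}}' and depth > 0:
            depth -= 1
            if depth == 0:
                spans.append(text[start:i + 2])
            i += 2
        else:
            i += 1
    return spans

# find_templates('a {{t|{{inner}}}} b {{u}}') -> ['{{t|{{inner}}}}', '{{u}}']
```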
With Gerrit change 159073 [1], Jenkins can now verify the PEP 257
compliance of certain files. This prevents later changes from
accidentally breaking the compliance of those files. The voting tests
were enabled a few minutes ago.
If another file is made compliant, it should be added to tox.ini so
it will be monitored, similar to the Gerrit change [2].
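For illustration, such an entry would look something like the fragment below (the env name and file list are hypothetical; see the Gerrit change [2] for the actual pattern used):

```ini
# Hypothetical tox.ini fragment -- env name and file list are illustrative.
[testenv:pep257]
deps = pep257
commands = pep257 pywikibot/page.py pywikibot/site.py
```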
Fabian Neundorf
[1]: https://gerrit.wikimedia.org/r/159073/
[2]: https://gerrit.wikimedia.org/r/159223/
Hi
I am aspiring to participate in OPW Round 9 in Pywikibot. I am proficient
in Python and Django; kindly suggest some simple bugs to fix.
Regards
Tessy Joseph John
First you should test it with an unaccented pattern. If it works with
English letters, there may be a problem with your character encoding.
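To illustrate the kind of mismatch meant here (a sketch; the reporter's actual console encoding is unknown):

```python
# The same string encoded two ways produces different bytes, so a pattern
# read from a console in one encoding cannot match page text stored in
# another: 'í' is the single byte 0xED in cp1250 but 0xC3 0xAD in UTF-8.
pattern_cp1250 = "Externí odkazy".encode("cp1250")
text_utf8 = "Externí odkazy".encode("utf-8")
print(pattern_cp1250 == text_utf8)  # False
```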
2014-09-09 16:42 GMT+02:00 <bugzilla-daemon(a)wikimedia.org>:
> https://bugzilla.wikimedia.org/show_bug.cgi?id=70607
>
> Bug ID: 70607
> Summary: replace.py does not work
> Product: Pywikibot
> Version: core (2.0)
> Hardware: All
> OS: All
> Status: NEW
> Severity: normal
> Priority: Unprioritized
> Component: Other scripts
> Assignee: Pywikipedia-bugs(a)lists.wikimedia.org
> Reporter: jan.dudik(a)gmail.com
> Web browser: ---
> Mobile Platform: ---
>
> In compat:
> replace.py -regex -nocase -file:aa.log "==\s*Externí odkazy(.*?)\r\n\{\{Commonscat" "== Externí odkazy\1\n* {{Commonscat" -summary:"řádková verze {{Commonscat}}"
>
> Getting 60 pages from wikipedia:cs...
> ...
> No changes were necessary in [[Roman Polák (lední hokejista)]]
>
>
> >>> Roman Polanski <<<
> - {{Commonscat|Roman Polanski}}
> + * {{Commonscat|Roman Polanski}}
>
>
> In core, the same command:
> pwb.py replace -regex -nocase -file:aa.log "==\s*Externí odkazy(.*?)\r\n\{\{Commonscat" "== Externí odkazy\1\n* {{Commonscat" -summary:"řádková verze {{Commonscat}}"
>
> Retrieving 50 pages from wikipedia:cs.
> ...
> No changes were necessary in [[Roman Polanski]]
> No changes were necessary in [[Roman Polák (lední hokejista)]]
> No changes were necessary in [[Roman Romaněnko]]
>
>
> Why?
>
--
Bináris
For English Wikisource, I have a series of 63 volumes of the "Dictionary of
National Biography" that have now been transcribed. These works have many
internal cross-reference links, which we have been generating; however, over
the 16 years in which the volumes were originally compiled, some of the
articles that were planned were never written, so they have ended up as red
links. I wish to identify these red links.
I am trying to work out how to use Pywikibot to generate a list of the pages
with red links. Getting a list of pages to feed through is easy, as each
page is linked to an index page per volume, so it would be a namespace
collection from a central page, all based around {{PAGENAME}}.
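One approach, sketched against the core API as I understand it (the helper name and the page titles in the usage comment are my own, not part of Pywikibot):

```python
def red_links(page):
    """Yield titles of pages linked from `page` that do not exist.

    `page` is expected to be a pywikibot.Page-like object; linkedPages()
    and exists() are assumed to behave as in the core framework.
    """
    for linked in page.linkedPages():
        if not linked.exists():
            yield linked.title()

# Usage sketch (titles are illustrative):
#   import pywikibot
#   site = pywikibot.Site('en', 'wikisource')
#   index = pywikibot.Page(site, 'Some DNB volume index page')
#   for volume_page in index.linkedPages():
#       for title in red_links(volume_page):
#           print(volume_page.title(), '->', title)
```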
Thanks for any help provided.
Regards, Billinghurst
Hi folks,
since the badges have been moved to Wikidata, placing Link FA and Link GA won't be needed anymore. Therefore featured.py could be removed from the framework scope and placed into an archive folder (as we had in the past on SVN). I don't know where the right place for it is. First I thought pywikibot/bots/misc could be the right one; unfortunately, we have compat and core variants.
Any suggestions?
Best
xqt
I propose this change:
- set `page.text` to `None` to discard changes
- delete `page.text` to reload content -- equivalent to get(force=True)
- in Page.save(), if `page.text` equals the unchanged text, make no API call
- to touch a page, use page.touch() instead, which will call action="edit" &
appendtext="". The advantage is that there is no need to preload the text.
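The proposed semantics can be sketched like this (a hypothetical LazyPage class, not the real pywikibot.Page; the wiki API is simulated with a stored string):

```python
class LazyPage:
    """Toy model of the proposed page.text behaviour."""

    def __init__(self, fetched_text):
        self._remote = fetched_text   # what the wiki currently holds
        self._text = fetched_text     # local working copy

    @property
    def text(self):
        if self._text is None:        # was deleted: reload, like get(force=True)
            self._text = self._remote # real code would re-fetch via the API
        return self._text

    @text.setter
    def text(self, value):
        if value is None:             # proposal: None discards local changes
            self._text = self._remote
        else:
            self._text = value

    @text.deleter
    def text(self):
        self._text = None             # next access re-fetches

    def save(self):
        if self._text is None or self._text == self._remote:
            return False              # unchanged: no API call
        self._remote = self._text     # real code would call action="edit"
        return True
```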
Sorawee Porncharoenwase
It has been reported
<https://it.wikipedia.org/wiki/Speciale:Diff/67448044> to me that core's
welcome.py doesn't stop when the '-break' option is enabled.
Unfortunately, I have no past experience with it. Does it work for you?