Happy Monday,
There are strange people who make links like this (they look kind of URL-encoded):
[[Második világháború#Partrasz.C3.A1ll.C3.A1s Szic.C3.ADli.C3.A1ban .28Huskey hadm.C5.B1velet.29|Huskey hadműveletben]]
So the section title must have been copied from the URL.
Do we have a ready tool to fix these?
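Even without a ready tool, the escapes should be reversible if these are MediaWiki's old dot-encoded anchors (percent-encoding with "." instead of "%"). A minimal sketch of the reverse transformation; decode_anchor is a hypothetical helper, not an existing script:

import re
import urllib.parse  # Python 3; Python 2 would use urllib.unquote

def decode_anchor(anchor):
    # MediaWiki's old anchors escape bytes as ".XX"; turn those back
    # into "%XX" escapes, then percent-decode the result.
    percent = re.sub(r'\.([0-9A-F]{2})', r'%\1', anchor)
    return urllib.parse.unquote(percent)

print(decode_anchor(u'Partrasz.C3.A1ll.C3.A1s Szic.C3.ADli.C3.A1ban .28Huskey hadm.C5.B1velet.29'))
# -> Partraszállás Szicíliában (Huskey hadművelet)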
--
Bináris
Hello all
From one of my assignments as a bot operator I have some code that
does template parsing and general text parsing (e.g. Image/File tags).
It does not use regex and is thus able to correctly parse nested
templates and other such nasty things. I have written these as library
classes, with tests that cover almost all of the code.
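For a feel of why regex falls short here, a minimal sketch of the usual brace-counting alternative; this is an illustration only, not the actual library, and template_span is a hypothetical helper:

def template_span(text, start):
    # Return the (start, end) span of the template opening at `start`,
    # counting {{ / }} pairs so that nesting is handled correctly;
    # a single regex cannot do this for arbitrary depth.
    depth = 0
    i = start
    while i < len(text) - 1:
        pair = text[i:i + 2]
        if pair == '{{':
            depth += 1
            i += 2
        elif pair == '}}':
            depth -= 1
            i += 2
            if depth == 0:
                return (start, i)
        else:
            i += 1
    return None  # unbalanced braces

print(template_span(u'{{outer|a={{inner|x}}|b=1}} tail', 0))
# -> (0, 27)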
I would now really like to contribute that code back to the community.
Would you be interested in adding this code to the pywikibot
framework? If so, can I send the code to someone for review, or how
do you usually operate?
Greetings
Hannes
PS: wiki userpage is http://en.wikipedia.org/wiki/User:Hannes_R%C3%B6st
Hello! Is there a bot or extension that can generate stubs of template
descriptions from the template's wikitext? It should be pretty simple: the
bot would just grab all template parameters {{{PARAMETER}}} and list those
parameters.
If no such extension exists, could anybody tell me how to programmatically
get the parameters of a given template?
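For illustration, a minimal sketch under the same "pretty simple" assumption; template_parameters is a hypothetical helper, and a plain regex like this will miss exotic nesting corner cases:

import re

def template_parameters(wikitext):
    # Match the name part of {{{name}}} or {{{name|default}}},
    # keeping first-seen order and dropping duplicates.
    names = re.findall(r'\{\{\{([^{}|]+)', wikitext)
    result = []
    for name in names:
        name = name.strip()
        if name not in result:
            result.append(name)
    return result

print(template_parameters(u'{{{1}}} was born in {{{birthplace|unknown}}}.'))
# -> ['1', 'birthplace']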
Sincerely yours,
-----
Yury Katkov
I just wanted to put() a simple page on a MediaWiki 1.16
instance, where I have to use screen scraping (use_api=False).
There is something strange, however: an API call is invoked by _getBlock:
/w/api.php?action=query&format=json&meta=userinfo&uiprop=blockinfo
Here's my backtrace:
File "pywikipedia/wikipedia.py", line 693, in get
expandtemplates = expandtemplates)
File "pywikipedia/wikipedia.py", line 743, in _getEditPage
return self._getEditPageOld(get_redirect, throttle, sysop, oldid, change_edit_time)
File "pywikipedia/wikipedia.py", line 854, in _getEditPageOld
text = self.site().getUrl(path, sysop = sysop)
File "pywikipedia/wikipedia.py", line 5881, in getUrl
self._getUserDataOld(text, sysop = sysop)
File "pywikipedia/wikipedia.py", line 6016, in _getUserDataOld
blocked = self._getBlock(sysop = sysop)
File "pywikipedia/wikipedia.py", line 5424, in _getBlock
data = query.GetData(params, self)
File "pywikipedia/query.py", line 146, in GetData
jsontext = site.getUrl( path, retry=True, sysop=sysop, data=data)
getUrl(), which is also called for API requests, seems always
to call _getUserDataOld(text), where text is the raw API output,
so it tries to do strange things with that and gives warnings
like

Note: this language does not allow global bots.
WARNING: Token not found on wikipedia:pl. You will not be able to edit any page.

which is nonsense, since the analyzed text is not HTML, only API output.
If getUrl() is supposed to be a low-level call, why call _getUserDataOld()
there?
http://www.mediawiki.org/wiki/Special:Code/pywikipedia/7461
introduced this call.
It's easily reproducible with this:
import wikipedia
import config

config.use_api = False   # force screen scraping instead of the API
wikipedia.verbose = True

s = wikipedia.getSite("pl", "wikipedia")
p = wikipedia.Page(s, u"User:Saper")
c = p.get()              # this already triggers the spurious _getUserDataOld() call
c += "<!-- test -->"
p.put(c, u"Testing wiki", botflag=False)
//Saper
Having problems with pageimport.py on a third-party wiki (WikiQueer). Is anyone else having issues with that script?
I'm calling it from a script I'm playing around with, but no luck. It doesn't raise an error, but nothing gets imported and the script reports that the import failed.
Here's the "test" script I'm working from:
import wikipedia as pywikibot
from pageimport import *

def main():
    wanted_category_title = "Apple"
    enwiki_site = pywikibot.getSite()
    importerbot = Importer(enwiki_site)  # Initializing
    importerbot.Import(wanted_category_title, project='wikipedia', prompt=True)

try:
    main()
finally:
    pywikibot.stopme()
On a related note, the ultimate goal is to import pages for "Wanted Categories" from English Wikipedia into the third-party wiki. Any ideas, tips or existing code to that end would also be appreciated.
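For what it's worth, a minimal sketch of one way to fetch those titles, going straight to the API's list=querypage (which exposes Special:WantedCategories) instead of through the framework; wanted_categories is a hypothetical helper and the 10-item limit is arbitrary:

import json
import urllib.request  # Python 3 stdlib

API = ('https://en.wikipedia.org/w/api.php'
       '?action=query&list=querypage&qppage=Wantedcategories'
       '&qplimit=10&format=json')

def wanted_categories():
    # Wikimedia asks for a descriptive User-Agent on API requests.
    request = urllib.request.Request(API, headers={'User-Agent': 'wanted-cats-sketch/0.1'})
    with urllib.request.urlopen(request) as response:
        data = json.load(response)
    return [row['title'] for row in data['query']['querypage']['results']]

for title in wanted_categories():
    print(title)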
Thanks!
-greg aka varnent
-------
Gregory Varnum
Lead, Aequalitas Project
Lead Administrator, WikiQueer
Founding Principal, VarnEnt
@GregVarnum
fb.com/GregVarnum
Dear all
I posted to this mailing list in January about a library that I
wanted to contribute to the codebase. This is part of an effort on my
side to refactor code that accumulated over various bot-operator tasks
and make it available to the community. The main part of the code
deals with spellchecking using hunspell
(http://hunspell.sourceforge.net/) instead of the list-based approach
currently used in spellcheck.py. The second part is an interactive
robot for reviewing flagged revisions ("Sichten") on the German
Wikipedia. There are some API functions that use the "undo"
functionality of the action=edit command, and an API function that
uses the action=review command.
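For the curious, a minimal sketch of the hunspell idea via the pyhunspell bindings; the dictionary paths are an assumption (typical Debian locations), and this is not the submitted code itself:

import hunspell

# HunSpell takes the .dic path first, then the .aff path.
checker = hunspell.HunSpell('/usr/share/hunspell/de_DE.dic',
                            '/usr/share/hunspell/de_DE.aff')

for word in [u'Beispiel', u'Bheispiel']:
    if not checker.spell(word):
        # suggest() returns a list of candidate corrections
        print(word, checker.suggest(word))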
So I wanted to ask whether somebody has time to have a look at the
code I submitted here:
http://sourceforge.net/tracker/?func=detail&aid=3479070&group_id=93107&atid…
(I uploaded a new file, "(moved testSamples)"; please use this one to
test, since the other one seems corrupt and can no longer be deleted.)
Is there a code-review process I can go through, or what do you
suggest as the best way to get the code into trunk (if at all)? Would
it be easier if I talked directly to one of you?
What are the criteria for getting SVN commit access? I was just
wondering what the general rules are.
Greetings
Hannes
Hi everyone
I added Wikidata edits to PWB. For now it's basic and you can only add,
change, or remove labels, but I plan to improve it.
You can add the wikidata family to your user-config.py and run code
similar to this:
import wikipedia

site = wikipedia.getSite('wikidata', fam='wikidata')
page = wikipedia.Page(site, "Q321")
page.put(u"", u"Bot: testing", wikidata=True, labelwikidata=u"no",
         valuewikidata=u"Test FOO")
I ran that and it worked
(http://wikidata-test-repo.wikimedia.de/w/index.php?title=Q321&curid=9133&di…),
but if there are any bugs feel free to tell me.
Cheers
--
Amir
Hi,
I would like to join the Python Wikipedia Robot Framework project on
SourceForge.
Some of my contributions:
https://sourceforge.net/tracker/?func=detail&atid=603140&aid=3509841&group_…
Currently I have been working on an automated Picasa batch upload script.
I kindly request to be added to this project.
Regards,
Jenith
Hi all,
I noticed that SuggestBot struggled with saving a user page earlier
this week, see http://en.wikipedia.org/w/index.php?title=User_talk:The_Master_of_Mayhem&ac…
Notice the large number of saves that have a diff size of 0 bytes.
I suspect it's due to the size of the page (300+ kB); is this a typical
problem? If not, I can start digging to see if I can figure out what's
going on. If it is a common problem, what are some typical ways of
solving it? Just checking the page size and skipping/aborting if it's
too large?
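If it helps, a minimal sketch of that last idea against the old wikipedia.py interface; put_if_small_enough and the 300 kB threshold are both hypothetical:

import wikipedia

MAX_BYTES = 300 * 1024  # arbitrary cut-off, tune to taste

def put_if_small_enough(page, new_text, comment):
    # Skip the save when the wikitext exceeds the size threshold.
    if len(new_text.encode('utf-8')) > MAX_BYTES:
        wikipedia.output(u'Skipping %s: page too large' % page.title())
        return False
    page.put(new_text, comment)
    return True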
Regards,
Morten
Hello,
For a manual run it may work, but will it work for an automated run such as MiszaBot's? At Meta, the archiving system is automated and I'm not the owner of MiszaBot :-)
Regards, M.
----- Original Message -----
From: info(a)gno.de
Sent: 28-10-12 11:44
To: pywikipedia-l(a)lists.wikimedia.org
Subject: Re: [Pywikipedia-l] archivebot.py issues

Update the bot to release r10622 or higher and try again with the option
--page="m::Wikimedia Forum". A double colon or leading colon now implies
the main namespace, as expected and as was intended for page titles a
long time ago.

Regards
xqt

----- Original Message -----
From: legoktm <legoktm.wikipedia(a)gmail.com>
To: Pywikipedia discussion list <pywikipedia-l(a)lists.wikimedia.org>
Date: 28.10.2012 03:10
Subject: Re: [Pywikipedia-l] archivebot.py issues

> If you modify line 293 and remove the "defaultNamespace=3" it should work.
> Looks like that was introduced in pyrev:10149
> <https://www.mediawiki.org/wiki/Special:Code/pywikipedia/10149>.
> Not sure if it was intentional or not.
> -- Legoktm
>
> On Sat, Oct 27, 2012 at 3:19 PM, Marco Aurelio <maurelio(a)gmx.es> wrote:
>
> > Hi,
> >
> > At Meta-Wiki we've detected
> > <https://meta.wikimedia.org/w/index.php?title=Meta:Babel&oldid=4341573#MiszaBot_has_stopped_archiving_main_ns_pages>
> > that the bot that used to archive pages (MiszaBot) has suddenly stopped
> > doing so on main namespace pages. I did try with my bot, using the
> > archivebot.py script on a backlogged page, [[m:Wikimedia Forum]], and
> > the result is:
> >
> > Processing [[meta:Wikimedia Forum]]
> > Looking for: {{User:MiszaBot/config}} in [[meta:User talk:Wikimedia Forum]]
> > Error occured while processing page [[meta:Wikimedia Forum]]
> > Traceback (most recent call last):
> >   File "C:\Python\pywikibot\archivebot.py", line 601, in main
> >     Archiver = PageArchiver(pg, a, salt, force)
> >   File "C:\Python\pywikibot\archivebot.py", line 376, in __init__
> >     self.loadConfig()
> >   File "C:\Python\pywikibot\archivebot.py", line 418, in loadConfig
> >     raise MissingConfigError(u'Missing or malformed template')
> > MissingConfigError: Missing or malformed template
> >
> > C:\Python\pywikibot>
> >
> > Something seems to have changed in the script's code, which now only
> > looks for the Talk: and User talk: namespaces.
> >
> > Can the script please be fixed so it works on all namespaces again? A
> > fix would be deeply appreciated.
> >
> > Best regards, M.
> >
> > --
> > - Marco Aurelio
--
- Marco Aurelio