Hello! Is there a bot or extension that can generate stubs of template
descriptions from template wikitext? It should be pretty simple: the bot
would just grab all the template parameters {{{PARAMETER}}} and list those
parameters.
If no such extension exists, could anybody tell me how to programmatically
get the parameters of a given template?
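What I have in mind is something as simple as this (just a sketch of the
idea, not an existing tool; default values like {{{foo|bar}}} are handled
by cutting at the first "|", nested cases are ignored):

import re

def template_parameters(wikitext):
    # Collect {{{parameter}}} names from the template's wikitext,
    # dropping default values and duplicates, keeping order.
    names = []
    for match in re.findall(r'\{\{\{(.+?)\}\}\}', wikitext):
        name = match.split('|')[0].strip()
        if name not in names:
            names.append(name)
    return names

print(template_parameters(u"{{{name|}}} was born in {{{birthplace}}}."))
# -> [u'name', u'birthplace']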
Sincerely yours,
-----
Yury Katkov
I just wanted to put() a simple page on a MediaWiki 1.16
instance, where I have to use screen scraping (use_api=False).
Something strange happens, however:
there is an API call invoked by _getBlock:
/w/api.php?action=query&format=json&meta=userinfo&uiprop=blockinfo
Here's my backtrace:
File "pywikipedia/wikipedia.py", line 693, in get
expandtemplates = expandtemplates)
File "pywikipedia/wikipedia.py", line 743, in _getEditPage
return self._getEditPageOld(get_redirect, throttle, sysop, oldid, change_edit_time)
File "pywikipedia/wikipedia.py", line 854, in _getEditPageOld
text = self.site().getUrl(path, sysop = sysop)
File "pywikipedia/wikipedia.py", line 5881, in getUrl
self._getUserDataOld(text, sysop = sysop)
File "pywikipedia/wikipedia.py", line 6016, in _getUserDataOld
blocked = self._getBlock(sysop = sysop)
File "pywikipedia/wikipedia.py", line 5424, in _getBlock
data = query.GetData(params, self)
File "pywikipedia/query.py", line 146, in GetData
jsontext = site.getUrl( path, retry=True, sysop=sysop, data=data)
getUrl(), which is also called from the API code, seems to always
call _getUserDataOld(text), where text is ... the API output,
so it tries to do strange things with it and gives warnings
like
Note: this language does not allow global bots.
WARNING: Token not found on wikipedia:pl. You will not be able to edit any page.
which is nonsense since the analyzed text is not HTML - only API output.
If getUrl() is supposed to be a low-level call, why call _getUserDataOld()
there?
http://www.mediawiki.org/wiki/Special:Code/pywikipedia/7461
introduced this call there.
It's easily reproducible with this:
import wikipedia
import config
config.use_api = False
wikipedia.verbose = True
s = wikipedia.getSite("pl", "wikipedia")
p = wikipedia.Page(s, u"User:Saper")
c = p.get()
c += "<!-- test -->"
p.put(c, u"Testing wiki", botflag=False)
//Saper
Having problems with pageimport.py on a third-party wiki (WikiQueer). Anyone else having issues with that script?
I'm calling it from a script I'm playing around with, but no luck. It doesn't error out, but nothing gets imported and the script reports that the import failed.
Here's the "test" script I'm working from:
import wikipedia as pywikibot
from pageimport import *

def main():
    wanted_category_title = "Apple"
    enwiki_site = pywikibot.getSite()
    importerbot = Importer(enwiki_site)  # Initializing
    importerbot.Import(wanted_category_title, project='wikipedia', prompt=True)

try:
    main()
finally:
    pywikibot.stopme()
On a related note, the ultimate goal is to import pages for "Wanted Categories" from English Wikipedia into the third-party wiki. Any ideas, tips or existing code to that end would also be appreciated.
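For reference, here's the rough direction I'm considering for the
wanted-categories part (just a sketch; the API URL is a placeholder, and
the list=querypage / qppage=Wantedcategories parameters are my assumptions
about the standard MediaWiki API, not something I've tested on WikiQueer):

import json
import urllib

import wikipedia as pywikibot
from pageimport import *

# Placeholder - replace with the third-party wiki's api.php
API_URL = 'http://example.org/w/api.php'

def wanted_category_titles(limit=10):
    # Ask the wiki for its Special:WantedCategories entries via the API.
    params = urllib.urlencode({'action': 'query', 'list': 'querypage',
                               'qppage': 'Wantedcategories',
                               'qplimit': limit, 'format': 'json'})
    data = json.load(urllib.urlopen(API_URL + '?' + params))
    return [row['title'] for row in data['query']['querypage']['results']]

def main():
    enwiki_site = pywikibot.getSite()
    importerbot = Importer(enwiki_site)
    for title in wanted_category_titles():
        # Import the corresponding category page from English Wikipedia
        importerbot.Import(title, project='wikipedia', prompt=True)

try:
    main()
finally:
    pywikibot.stopme()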
Thanks!
-greg aka varnent
-------
Gregory Varnum
Lead, Aequalitas Project
Lead Administrator, WikiQueer
Founding Principal, VarnEnt
@GregVarnum
fb.com/GregVarnum
Dear all
I posted to this mailing list in January with a library that I
wanted to contribute to the codebase. This is part of an effort on my
side to refactor code that accumulated over various bot-operator tasks
and make it available to the community. The main part of the code
deals with spellchecking using hunspell
(http://hunspell.sourceforge.net/) instead of the list-based approach
currently used in spellcheck.py. The second part is an interactive
robot for reviewing pending revisions (Sichten) on the German Wikipedia.
There are some API functions that use the "undo" feature of the
action=edit command and an API function that uses the action=review
command.
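To give an idea of the hunspell part, the core is roughly the following
(only a sketch using the pyhunspell bindings; the dictionary paths are
assumptions for a typical Linux install, and the submitted code does more):

import hunspell  # pyhunspell bindings

# Dictionary paths are assumptions for a typical Linux install
checker = hunspell.HunSpell('/usr/share/hunspell/de_DE.dic',
                            '/usr/share/hunspell/de_DE.aff')

word = u'Farrrad'
if not checker.spell(word.encode('utf-8')):
    # hunspell proposes corrections instead of relying on a fixed word list
    print(checker.suggest(word.encode('utf-8')))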
So I wanted to ask whether somebody has had time to have a look at the
code I submitted here:
http://sourceforge.net/tracker/?func=detail&aid=3479070&group_id=93107&atid…
(I uploaded a new file, "(moved testSamples)" - please use this one for
testing; the other one seems to be corrupt and can no longer be deleted.)
So, is there a code-review process I can go through, or what do you
suggest is the best way to get the code into trunk (if at all)? Would
it be easier if I talked directly to one of you?
Also, what are the criteria for getting SVN commit access? I was just
wondering what the general rules are.
Greetings
Hannes
Hi everyone
I added Wikidata edits to PWB. For now it's basic and you can only change,
add, or remove labels, but I plan to improve it.
You can add the wikidata family to your user-config.py and run code similar
to this:
import wikipedia
site = wikipedia.getSite('wikidata', fam='wikidata')
page = wikipedia.Page(site, "Q321")
page.put(u"", u"Bot: testing", wikidata=True, labelwikidata=u"no",
         valuewikidata=u"Test FOO")
I ran that and it worked
(<http://wikidata-test-repo.wikimedia.de/w/index.php?title=Q321&curid=9133&di…>),
but if there are any bugs feel free to tell me.
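By the way, the user-config.py side is just something like this (the bot
name below is a placeholder, use your own account):

# Add to user-config.py:
usernames['wikidata']['wikidata'] = u'YourBotName'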
Cheers
--
Amir
I noticed that the Category class has a members() method that calls
site.categorymembers() but doesn't expose all the options of the latter
(e.g. sorting by timestamp).
I thought I could spend a bit of time writing a patch that does just that,
but maybe there's a reason why I shouldn't? Feel free to let
me know.
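For reference, the kind of change I have in mind is just forwarding extra
keyword arguments, roughly like this (a sketch, not a finished patch;
option names such as sortby are my assumption about what
site.categorymembers() accepts):

# Sketch only: let Category.members() pass extra options straight through
# to site.categorymembers(), e.g. members(sortby='timestamp').
def members(self, recurse=False, namespaces=None, **kwargs):
    return self.site.categorymembers(self, recurse=recurse,
                                     namespaces=namespaces, **kwargs)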
Cheers,
Morten
Hi folks,
my command was: replace.py -search:korai -regex "(?i)korai éve[ki]"
ifjúkorain, on huwiki.
It got 60 pages three times, but during the third batch it stopped with:
Traceback (most recent call last):
  File "C:\Pywikipedia\pagegenerators.py", line 1211, in __iter__
    for page in self.wrapped_gen:
  File "C:\Pywikipedia\pagegenerators.py", line 1088, in DuplicateFilterPageGenerator
    for page in generator:
  File "C:\Pywikipedia\pagegenerators.py", line 808, in SearchPageGenerator
    for page in site.search(query, number=number, namespaces = namespaces):
  File "C:\Pywikipedia\wikipedia.py", line 6524, in search
    raise NotImplementedError('%s' % data['error']['info'])
NotImplementedError: text search is disabled
text search is disabled
Why did this happen to me? :-(
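In case it helps with diagnosing, here is a quick check of the wiki's
search API itself, independent of pywikipedia (just a sketch):

import json
import urllib

# Query huwiki's search API directly; if search is disabled server-side,
# the reply should contain an "error" object instead of results.
params = urllib.urlencode({'action': 'query', 'list': 'search',
                           'srsearch': 'korai', 'format': 'json'})
data = json.load(urllib.urlopen('http://hu.wikipedia.org/w/api.php?' + params))
print(data.get('error') or data['query']['search'][:3])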
--
Bináris
Hi,
I would like to join the Python Wikipedia Robot Framework project on
SourceForge.
Some of my contributions:
https://sourceforge.net/tracker/?func=detail&atid=603140&aid=3509841&group_…
Currently I am working on an automated Picasa batch-upload script.
I kindly request to be added to this project.
Regards,
Jenith
Hi all,
I noticed that SuggestBot struggled with saving a user page earlier
this week, see http://en.wikipedia.org/w/index.php?title=User_talk:The_Master_of_Mayhem&ac…
Notice the large number of saves that have a diff size of 0 bytes.
I suspect it's due to the size of the page (300+ kB). Is this a typical
problem? If not, I can start digging to see if I can figure out what's
going on. If it is a common problem, what are some typical ways of
solving it? Just checking the page size and skipping/aborting if it's too
large?
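If skipping is the answer, I was picturing something as simple as this
(only a sketch; the threshold is arbitrary and this is not existing
SuggestBot code):

import wikipedia

MAX_BYTES = 250 * 1024  # arbitrary cut-off for "too large"

def put_unless_huge(page, newtext, comment):
    # Skip pages whose current wikitext is already very large, since the
    # save seems to fail silently (0-byte diffs) on 300+ kB pages.
    oldtext = page.get()
    if len(oldtext.encode('utf-8')) > MAX_BYTES:
        wikipedia.output(u'Skipping %s: page too large' % page.title())
        return False
    page.put(newtext, comment)
    return True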
Regards,
Morten
Scott,
Nemo is referring to dumpgenerator.py being broken on MediaWiki
versions above 1.20; it should not affect older MediaWiki
versions.
You can safely continue with your grab. :)
On Sat, Nov 10, 2012 at 12:45 PM, Scott Boyd <scottdb56(a)gmail.com> wrote:
> At this link: https://code.google.com/p/wikiteam/issues/detail?id=56 , at
> the bottom, there is an entry by project member nemowiki that states:
>
> Comment 7 <https://code.google.com/p/wikiteam/issues/detail?id=56#c7> by
> project member nemowiki <https://code.google.com/u/101255742639286016490/>,
> Today (9 hours ago)
>
> Fixed by emijrp in r806 <https://code.google.com/p/wikiteam/source/detail?r=806>. :-)
>
> *Status:* Fixed
>
> So does that mean this problem that "It's completely broken" is now fixed?
> I'm running a huge download of 64K+ page titles, and am now using the
> "r806" version of dumpgenerator.py. The first 35K+ page titles were
> downloaded with an older version). Both versions sure seem to be
> downloading MORE than 500 pages per namespace, but I'm not sure, since I
> don't know how you can tell if you are getting them all...
>
> So is it fixed or not?
>
>
> On Fri, Nov 9, 2012 at 4:27 AM, Federico Leva (Nemo) <nemowiki(a)gmail.com>wrote:
>
>> It's completely broken:
>> https://code.google.com/p/wikiteam/issues/detail?id=56
>> It will download only a fraction of the wiki, 500 pages at most per
>> namespace.
--
Regards,
Hydriz
We've created the greatest collection of shared knowledge in history. Help
protect Wikipedia. Donate now: http://donate.wikimedia.org