Hello! Is there a bot or extension that can generate stubs of template
descriptions from the template wikitext? It should be pretty simple: the bot
would just grab all template parameters {{{PARAMETER}}} and list them.
If no such extension exists, could anybody tell me how to programmatically get
the parameters of a given template?
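If nothing exists, a first pass could be a simple regex over the template source. A minimal sketch (not a full wikitext parser; {{{name|default}}} defaults are handled, but nested transclusions are not):

```python
import re

def template_params(wikitext):
    """Return the distinct {{{parameter}}} names found in template wikitext."""
    # Matches {{{name}}} and {{{name|default}}}; nested templates and
    # other complex constructs are ignored -- this is a rough first pass.
    found = re.findall(r'\{\{\{([^{}|]+)', wikitext)
    params = []
    for name in (n.strip() for n in found):
        if name not in params:
            params.append(name)
    return params
```

The order of first appearance is kept, so the generated stub lists parameters in the order they occur in the template.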
Sincerely yours,
-----
Yury Katkov
I am trying to edit pages on a private wiki with pywikipedia. The wiki is password-protected, so you can't even look at a page without logging in. While the login works normally, I get an error saying that I don't have access to the API as soon as I try to actually do anything:
RuntimeError: {u'info': u'You need read permission to use this module', u'code': u'readapidenied'}
The bot's user account is active, and the bot user group should have access to the API.
Does anybody have an idea how I can solve this problem?
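For reference, `readapidenied` on a private wiki usually means either that the `read` right is not granted to the group the session belongs to, or that the bot's API login did not actually succeed (so the request is treated as anonymous). A guess at the usual LocalSettings.php arrangement; the group names here are assumptions, not taken from your wiki:

```php
# Anonymous users may not read; logged-in users (including the bot) may.
$wgGroupPermissions['*']['read'] = false;
$wgGroupPermissions['user']['read'] = true;
```

If these are already in place, it is worth checking that the bot logs in through api.php and that its session cookies are actually being sent with subsequent requests.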
Thank you very much!
Today I put text on a page with my bot. pywikibot.showDiff() clearly shows
that two lines of text are removed from the previous revision, and the return
value of put() also indicates that the save succeeded:
(302, 'OK', {u'pageid': ..., u'title': u'...', u'newtimestamp': u'...',
u'contentmodel': u'wikitext', u'result': u'Success', u'oldrevid': ...,
u'newrevid': ...})
However, the text actually isn't changed! The history page shows me an edit
of 1 byte (or sometimes 0 bytes), but the content and its previous version
are really the same.
Has anyone run into this problem?
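One way to narrow this down is to re-fetch the page after saving and compare, bypassing any locally cached copy. A sketch with a hypothetical helper; it only assumes a page object with get()/put() in the pywikibot style, where get(force=True) skips the cache:

```python
def verify_put(page, new_text, summary):
    """Save new_text and confirm the wiki actually stored it."""
    before = page.get()
    page.put(new_text, summary)
    after = page.get(force=True)  # bypass the locally cached revision
    if after == before and before != new_text:
        raise RuntimeError("put() reported success but the text is unchanged")
    return after
```

If the re-fetched text matches the old revision, the problem is on the save path; if it matches the new text, the diff or history display is what's misleading.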
----
Sorawee Porncharoenwase
Hi folks,
Is there a tool to hunt for overmagnified pictures and fix them? I mean cases
where a JPG is originally 150 px wide but is rendered at 300 px in the
article, which is a useless enlargement and results in an ugly picture.
Or do we have any part of this already, e.g. determining the size of a picture?
I think JPG is the main concern; GIFs are perhaps less affected, and SVG is
of course not a problem.
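As a building block, the rendered sizes can be pulled out of the article wikitext and compared against the original widths. In practice the original widths would come from an imageinfo API query; here they are passed in as a dict so the sketch stays self-contained:

```python
import re

def overmagnified(wikitext, original_widths):
    """Report images rendered wider than their original pixel width.

    original_widths maps file name -> original width in pixels.
    """
    hits = []
    # Matches [[File:Name.jpg|...|300px|...]]; simplistic, not a parser.
    for name, px in re.findall(
            r'\[\[(?:File|Image):([^|\]]+)\|[^\]]*?(\d+)px', wikitext):
        name = name.strip()
        if name in original_widths and int(px) > original_widths[name]:
            hits.append((name, int(px), original_widths[name]))
    return hits
```

Each hit is a (file name, rendered width, original width) triple, so a report or a fixing bot could be built on top of it.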
--
Bináris
I just wanted to put() a simple page on a MediaWiki 1.16
instance, where I have to use screen scraping (use_api=False).
There is something strange, however: an API call is invoked by _getBlock:
/w/api.php?action=query&format=json&meta=userinfo&uiprop=blockinfo
Here's my backtrace:
  File "pywikipedia/wikipedia.py", line 693, in get
    expandtemplates = expandtemplates)
  File "pywikipedia/wikipedia.py", line 743, in _getEditPage
    return self._getEditPageOld(get_redirect, throttle, sysop, oldid, change_edit_time)
  File "pywikipedia/wikipedia.py", line 854, in _getEditPageOld
    text = self.site().getUrl(path, sysop = sysop)
  File "pywikipedia/wikipedia.py", line 5881, in getUrl
    self._getUserDataOld(text, sysop = sysop)
  File "pywikipedia/wikipedia.py", line 6016, in _getUserDataOld
    blocked = self._getBlock(sysop = sysop)
  File "pywikipedia/wikipedia.py", line 5424, in _getBlock
    data = query.GetData(params, self)
  File "pywikipedia/query.py", line 146, in GetData
    jsontext = site.getUrl( path, retry=True, sysop=sysop, data=data)
getUrl(), which is also called from the API code path, seems always
to call _getUserDataOld(text), where text is ... API output,
so it tries to do strange things with that and gives warnings
like
Note: this language does not allow global bots.
WARNING: Token not found on wikipedia:pl. You will not be able to edit any page.
which is nonsense, since the analyzed text is not HTML - only API output.
If getUrl() is supposed to be a low-level call, why call _getUserDataOld()
there?
http://www.mediawiki.org/wiki/Special:Code/pywikipedia/7461
introduced this call there.
It's easily reproducible with this:
import wikipedia
import config
config.use_api = False
wikipedia.verbose = True
s = wikipedia.getSite("pl", "wikipedia")
p = wikipedia.Page(s, u"User:Saper")
c = p.get()
c += "<!-- test -->"
p.put(c, u"Testing wiki", botflag=False)
//Saper
Having problems with pageimport.py on a third-party wiki (WikiQueer). Is anyone else having issues with that script?
I'm calling it from a script I'm playing around with, but no luck. It doesn't error out, but nothing is imported, and it reports that the import failed.
Here's the "test" script I'm working from:
import wikipedia as pywikibot
from pageimport import *

def main():
    wanted_category_title = "Apple"
    enwiki_site = pywikibot.getSite()
    importerbot = Importer(enwiki_site)  # Initializing
    importerbot.Import(wanted_category_title, project='wikipedia', prompt=True)

try:
    main()
finally:
    pywikibot.stopme()
On a related note, the ultimate goal is to import pages for "Wanted Categories" from English Wikipedia into the third-party wiki. Any ideas, tips or existing code to that end would also be appreciated.
Thanks!
-greg aka varnent
-------
Gregory Varnum
Lead, Aequalitas Project
Lead Administrator, WikiQueer
Founding Principal, VarnEnt
@GregVarnum
fb.com/GregVarnum
Hi folks,
is there a piece of code in pywiki that easily determines whether the current
time zone is on summer or winter time? (Last Sunday of March, 3 o'clock, to
last Sunday of October, 2 o'clock.)
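I'm not aware of a pywiki helper for this, but the standard library already exposes the answer for the machine's own time zone. A sketch (it checks the local zone's DST flag rather than hard-coding the EU changeover rule, so it also works outside Europe):

```python
import time

def is_summer_time():
    """True if the local time zone is currently on daylight-saving time."""
    # tm_isdst is 1 during DST, 0 otherwise, as reported by the C library.
    return time.localtime().tm_isdst > 0
```

Note this only answers the question for the local zone at the current moment; checking an arbitrary zone or date would need a time-zone database.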
--
Bináris
You should delete the API cache first.
xqt
----- Original Message -----
From: Morten Wang
Sent: 27.02.2013 16:39
To: Pywikipedia discussion list
Subject: [Pywikipedia-l] Rewrite branch, issue with user pages in aliased namespace
Hi all,
I've run into an interesting issue on Portuguese Wikipedia, with a user page that's in the aliased user namespace:
>>> import pywikibot
>>> site = pywikibot.getSite('pt')
>>> page = pywikibot.Page(site, u"Usuário:Vitorvicentevalente")
>>> page.title()
u'Usu\xe1rio(a):Vitorvicentevalente'
>>> page.get()
[NOTE: callback trace removed for brevity]
pywikibot.exceptions.Error: loadrevisions: Query on [[pt:Usuário(a):Vitorvicentevalente]] returned data on 'Usuário:Vitorvicentevalente'
According to the API "Usuário" is a valid namespace alias[1]. Is there an easy workaround or fix here?
Also, I've noticed this is not an issue in trunk; it's only the rewrite branch that produces this error.
Footnotes:
1: ref: http://pt.wikipedia.org/w/api.php?action=query&meta=siteinfo&siprop=general…
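Until the rewrite branch resolves aliases itself, one workaround is to normalize the alias before constructing the Page. A sketch with a hypothetical helper; in practice the alias map would be built from action=query&meta=siteinfo&siprop=namespacealiases:

```python
def canonicalize_title(title, aliases):
    """Replace a namespace alias prefix with its canonical form.

    aliases maps alias -> canonical namespace name, e.g.
    {u"Usuário": u"Usuário(a)"} on pt.wikipedia.
    """
    if u':' in title:
        prefix, rest = title.split(u':', 1)
        canonical = aliases.get(prefix)
        if canonical:
            return canonical + u':' + rest
    return title
```

Constructing the Page from the canonicalized title should make the title the framework computes match the one the query returns.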
Regards,
Morten
Hi Amir,
the point is that there may be an existing data item without a language link to fa-wiki, and a bot would then be able to create a new repository item for fa-wiki that has no language links (or has its own link cluster). Before a bot creates a data repository item, we must be sure that a matching item does not already exist. I am unsure whether this is a job for bots.
Sample:
There is an existing item Q1234 with the links en:Hello, fr:Bonjour, es:Hola.
Your bot finds a new page de:Hallo without language links (or with a link to nl:Hoi, which has no entry in Q1234 yet).
The bot would then wrongly create a new data item for de:Hallo. It must first check that de:Hallo does not match any other data item's language links (in this sample, Q1234's). So you must not use this code snippet without taking care of this.
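The check described above could be sketched like this. The helper names are hypothetical, not part of the real DataPage API; in practice find_item_by_sitelink would query the repository (e.g. wbgetentities with sites/titles parameters):

```python
def safe_create_item(data_page, linked_titles, find_item_by_sitelink):
    """Create a repository item only if no existing item covers the page.

    linked_titles: (site_code, title) pairs for the page and anything
    it links to; find_item_by_sitelink returns an item id or None.
    """
    for site_code, title in linked_titles:
        existing = find_item_by_sitelink(site_code, title)
        if existing is not None:
            return existing  # reuse the item instead of duplicating it
    return data_page.createitem("Bot: creating missing item")
```

The key point is that every sitelink of the candidate page is checked against the repository before createitem() is called.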
Regards
xqt
----- Original Message -----
From: Amir Ladsgroup <ladsgroup(a)gmail.com>
To: Pywikipedia discussion list <pywikipedia-l(a)lists.wikimedia.org>
Date: 26.02.2013 10:19
Subject: Re: [Pywikipedia-l] Creating wikidata items
> Dear xqt, is it ok?
>
> # -*- coding: utf-8 -*-
> import wikipedia
> site = wikipedia.getSite('fa', fam='wikipedia')
> listofarticles = [u"???? ???????", u"???? ????"]
> for name in listofarticles:
>     page = wikipedia.Page(site, name)
>     data = wikipedia.DataPage(page)
>     try:
>         items = data.get()
>     except wikipedia.NoPage:
>         print "The item doesn't exist. Creating..."
>         data.createitem("Bot: Importing article from Persian wikipedia")
>     else:
>         print "It has been created already. Skipping..."
> I tested it and it was OK, but I'm not sure.
>
>
>
> On Tue, Feb 26, 2013 at 11:49 AM, <info(a)gno.de> wrote:
>
> > Hi folks,
> >
> > Reza1615 has published a small code snippet to create items in the data
> > repository. Please use this sample with care, because it does not test
> > whether a data repository item already exists. It only tests whether one
> > exists for a given site page. It could also be that a given site page
> > has no language link on a given repository page. This must be checked
> > before a page is created.
> >
> > Regards
> > xqt
> >
> > _______________________________________________
> > Pywikipedia-l mailing list
> > Pywikipedia-l(a)lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/pywikipedia-l
> >
>
>
>
> --
> Amir