Finally the bug is solved: you can get and set items on Wikidata via PWB.
First you must define a wikidataPage; these are the methods you can use.
I'll try to expand it and add more methods, but I don't know which ones are
needed yet, so your help would be very useful.
It supports the same interface as Page, with the following added methods:
setitem : Setting item(s) on a page
getentity : Getting item(s) of a page
These examples worked for me:
site = wikipedia.getSite('wikidata', fam='wikidata')
page = wikipedia.wikidataPage(site, "Helium")
text = page.getentity()
page.setitem(summary=u"BOT: TESTING FOO",
             items={'type': u'item', 'label': 'fa', 'value': 'OK'})
page.setitem(summary=u"BOT: TESTING GOO",
             items={'type': u'description', 'language': 'en', 'value': 'OK'})
page.setitem(summary=u"BOT: TESTING BOO",
             items={'type': u'sitelink', 'site': 'de', 'title': 'BAR'})
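To read back what getentity() returns, a sketch along these lines should work. I'm assuming it hands back the parsed JSON of the wbgetentities API module, so the 'entities', 'labels' and 'sitelinks' keys below come from that API format and are an assumption here, not something guaranteed by the method:

# Sketch: inspect the entity returned by getentity().
# Assumption: getentity() returns the parsed wbgetentities JSON.
import wikipedia

site = wikipedia.getSite('wikidata', fam='wikidata')
page = wikipedia.wikidataPage(site, "Helium")
entity = page.getentity()
for qid, data in entity.get('entities', {}).items():
    # print the English label and the number of sitelinks, if present
    label = data.get('labels', {}).get('en', {}).get('value')
    print qid, label, len(data.get('sitelinks', {}))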
Cheers!
--
Amir
Hi everyone,
I added Wikidata edits to PWB. For now it's basic and you can only change,
add or remove labels, but I have plans to improve it.
You can add the wikidata family to your user-config.py and run code similar to this:
import wikipedia

site = wikipedia.getSite('wikidata', fam='wikidata')
page = wikipedia.Page(site, "Q321")
page.put(u"", u"Bot: testing", wikidata=True,
         labelwikidata=u"no", valuewikidata=u"Test FOO")
I ran that and it worked
<http://wikidata-test-repo.wikimedia.de/w/index.php?title=Q321&curid=9133&di…>,
but if there are any bugs, feel free to tell me.
Cheers
--
Amir
Hello all,
A secondary subject in the git migration is the layout of the new
repositories. In svn, it's standard to have multiple projects within
the same repository (such as the extensions for mediawiki), while in
git, it's common to split these.
In our repository, we have the following projects:
- pywikiparser
- threadedhttp
- pywikipedia
which clearly should be in three separate repositories. However, I
think it might make sense to split up the pywikipedia repository a bit
further:
- spelling in a separate repository
- split bots from the framework; this is already somewhat the case for
  rewrite, but not for trunk. We could even consider having a repository
  per bot, but that might be a bit too much :-)
- i18n: can be either in the bots repository or in a separate one. I
  think the former makes more sense.
- split off family files
- split off userinterfaces (?)
and I think we should also split off the third party libraries - and
maybe remove them altogether. It might make sense to package them in
the nightlies, though.
I've started some work at https://github.com/pywikibot/svn2git - see
the scripts-pwb directory for the code. You can use ./sync to download
the entire repository (with history) to experiment - I'll try to get a
clearer README in there soon.
Any opinions?
Merlijn
Hi folks around the world,
there was a debate on Huwiki about cosmetic changes, which are not very
popular there. One of the problems is that we have Flagged Revs, and when a
bot edits an unpatrolled page, the changes are hard to review and the page is
difficult to patrol after cosmetic changes. Patrollers complain about this a
lot. Here is an example:
*difflink before cc
<http://hu.wikipedia.org/w/index.php?title=Az_Amerikai_Egyes%C3%BClt_%C3%81l…>
*difflink after cc
<http://hu.wikipedia.org/w/index.php?title=Az_Amerikai_Egyes%C3%BClt_%C3%81l…>
Is it possible to distinguish patrolled and unpatrolled versions?
I think the ideal solution would be this:
*If the wiki has Flagged Revs, cosmetic changes won't run on unpatrolled
pages as the default behaviour (see the sketch after this list)
*A new parameter could be introduced to force cc on these pages
*For patrolled pages and for wikis without Flagged Revs the process won't
change
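To make the idea more concrete, here is a rough, untested sketch of how such a default could look. The helper name, the force parameter and the use of prop=flagged with its pending_since field from the FlaggedRevs API are my assumptions; CosmeticChangesToolkit and query.GetData are taken from the existing trunk framework, but their exact use here should be double-checked by whoever implements this:

# -*- coding: utf-8 -*-
# Sketch only: skip cosmetic changes on pages that await review.
import query
from cosmetic_changes import CosmeticChangesToolkit

def page_is_unreviewed(page):
    # Hypothetical helper: True if the page has pending (unreviewed) changes.
    # Pages on wikis without Flagged Revs never carry a 'flagged' entry.
    data = query.GetData({'action': 'query',
                          'prop': 'flagged',
                          'titles': page.title()}, page.site())
    for p in data.get('query', {}).get('pages', {}).values():
        if 'pending_since' in p.get('flagged', {}):
            return True
    return False

def safe_cosmetic_changes(page, text, force=False):
    # Proposed default: leave unreviewed pages alone so diffs stay patrollable;
    # the hypothetical 'force' switch corresponds to the new parameter above.
    if not force and page_is_unreviewed(page):
        return text
    return CosmeticChangesToolkit(page.site()).change(text)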
Could somebody please implement this solution? Otherwise cc may easily be
prohibited in Huwiki.
Thanks in advance and Merry Christmas!
--
Bináris
Some remarks from me. Sorry, I am not very familiar with git/gerrit as a VCS, nor with svn; I just use these things as long as they work for me. I use TortoiseSVN for the svn repository and I have played a bit with TortoiseGit and the mw repository samples. I guess the only thing I need for a git repository is push access via labs. I hope so.
I think for bot operators it is much the same whether they check out the git or the svn repository, or at least use the nightly dump.
I agree with Martin that we should move closer to mw and not spread over different sites. This also applies to the documentation, which is still messy. But Bugzilla does not yet seem to be an alternative to SF, so we should not move outside the mw family just to keep svn.
If we could have GitHub as a mirror and the main repository at mw, why not? But I cannot estimate whether this has any benefits.
Regards
xqt
Hi, I just made a proposal on how to change the API to better support simple
"continue" scenarios -- see
http://lists.wikimedia.org/pipermail/mediawiki-api/2012-December/002768.html --
and I would like to get some feedback from the pywiki community. Would this
simplify internal API use inside pywiki? What are the biggest issues scripts
run into when using the API?
On the pywiki side, I am thinking of reworking the query module in this
direction and helping to migrate all API requests through it. There could
always be two levels for script authors: a low level, where the individual
API parameters are known to the script writer and the result is returned as
a dict(), and a high level, where the most common features offered by the
API are wrapped in methods, yet the speed is almost the same as the low
level (multiple data items are returned per web request).
pg1 = Page(u'Python')
pg2 = Page(u'Cobra')
pg3 = Page(u'Viper')
pages = [pg1, pg2, pg3]
params = {'prop': 'links', 'pllimit': 'max', 'titles': pages}

# QueryBlocks -- run the query until there is no more "continue", and return
# the dictionary as-is from each web call.
for block in pywiki.QueryBlocks(params):
    # process block
    pass

# QueryPages -- will take any query that returns a list of pages, and yield
# one page at a time. The individual page data will be merged across
# multiple API calls in case it exceeds the limit. This method could also
# return pre-populated Page objects.
for page in pywiki.QueryPages(params):
    # process one page at a time;
    # the Page object will have its links() property populated
    pass

# List* methods work with the list= API to request all available items based
# on the parameters:
for page in pywiki.ListAllPages(from=u'T', getContent=True, getLinks=True):
    # each page object will be prepopulated with links and page content
    pass
Thanks! Any feedback is helpful =)
--Yuri
I was changing about 150 articles with replace.py, but now there is a bad
value in an infobox.
Some of these articles were modified afterwards.
I want to revert them to the last version before my bot's edit, but
revertbot.py cannot work with a file.
How can I do this with the pywikipedia bot?
JAnD
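One way to do this without revertbot.py is a short ad-hoc script. Below is a rough, untested sketch that assumes the trunk (compat) framework; the file name pages.txt, the BOT_NAME account and the edit summary are placeholders, and pages that someone else edited after the bot are skipped so their changes are not overwritten:

# -*- coding: utf-8 -*-
# Sketch only: restore every listed page to its last revision before the
# bot's edit, skipping pages that were edited by others afterwards.
import wikipedia
import pagegenerators

BOT_NAME = u'MyBot'  # placeholder: the account that ran replace.py

site = wikipedia.getSite()
for page in pagegenerators.TextfilePageGenerator('pages.txt', site=site):
    # getVersionHistory() yields (revid, timestamp, user, comment), newest first
    history = page.getVersionHistory()
    if history and history[0][2] != BOT_NAME:
        wikipedia.output(u'Skipping %s: edited after the bot run.' % page.title())
        continue
    for revid, timestamp, user, comment in history:
        if user != BOT_NAME:
            # first revision below the bot's edits: restore its text
            page.put(page.getOldVersion(revid),
                     u'Bot: reverting faulty replace.py run')
            break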
Hi all,
I would like to request commit access. Currently I'm an administrator on the
Bosnian Wikipedia (Edinwiki) and work heavily with pywikipediabot.
During my work with pywikipediabot I localized a bunch of messages (i18n)
for personal use which I would like to commit to the SVN repository.
Furthermore, I noticed a few bugs in multiple scripts, of which commonscat.py
is one. There are a few pages on which the script (commonscat.py) breaks
(e.g. http://bs.wikipedia.org/wiki/Colt_M1911).
Besides these bugs, there are some conventions on the Bosnian Wikipedia that
we would also like to see implemented in the fixes.py script -- for example,
the naming of reference sections, which is not always done consistently by
the users.
These are the major things that I could work on. However, there are also
other changes, both minor and major, that I would like to suggest, which I
could communicate through the mailing list at a later moment. It should go
without saying, but I'm familiar with Python.
If you need more information or if there are additional requirements then
let me know.
Kind regards,
Edin
Hi everyone,
Unlike in previous years, the big European Hackathon won't be in Berlin, but
in Amsterdam. We're aiming to hold the hackathon in May 2013, with a
preference for the weekend of Saturday the 25th. To make sure this is a
good weekend I've set up a straw poll at
https://www.mediawiki.org/wiki/Amsterdam_Hackathon_2013#Straw_Poll .
Please fill it out so we can finalize the date!
Thank you,
Maarten
Wikimedia Nederland
P.S. Please forward this to any relevant lists I might have missed.