Based on all feedback gathered during the RfC about a possible
inter-project links interface [1], User:Tpt has created a content card
prototype (image: [2]).
To activate it, follow these steps:
1. Go to your common.js file. For a user named "Test" in English
Wikipedia, it would be: https://en.wikipedia.org/wiki/User:Test/common.js
2. Modify it and paste this line:
mw.loader.load('//www.wikidata.org/w/index.php?title=User:Tpt/interproject.js&action=raw&ctyp…');
3. Save and go to any Wikipedia page
4. You should see an icon next to the article title. If you don't,
refresh your browser cache. *Instructions*: Internet Explorer: hold down
the Ctrl key and click the Refresh or Reload button. Firefox: hold down the
Shift key while clicking Reload (or press Ctrl-Shift-R). Google Chrome and
Safari users can just click the Reload button.
What it does:
- It displays an icon next to the article title
- When you hover your mouse over the icon, it shows a *content card*.
- The content card displays information from Wikidata: label, image,
link to Commons gallery, and link to edit Wikidata.
What it is supposed to do in the future when Wikidata supports sister
projects:
- It will display content from, or links to, sister projects
Please leave your feedback on the Request for comments, thanks!
http://meta.wikimedia.org/wiki/Requests_for_comment/Interproject_links_inte…
Cheers,
Micru
[1]
http://meta.wikimedia.org/wiki/Requests_for_comment/Interproject_links_inte…
[2] http://commons.wikimedia.org/wiki/File:Content-card-prototype.png
Hi Daniel,
I started working on the DBpedia release and just wanted to check on
the current status of the Wikidata dumps. I saw that RDF data
and RDF URIs like http://www.wikidata.org/entity/Q1 are already
available. Cool! Do you think there will be RDF dumps soon, i.e. in
the next few weeks?
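(In case it's useful to others on the list: this is roughly how we fetch a single entity's RDF. A minimal Python sketch, assuming the entity URI supports content negotiation and that Turtle is among the served formats; the actual formats may differ.)

# Fetch RDF for one entity via content negotiation. Assumption: the
# concept URI redirects to an RDF serialization when an RDF media type
# is requested, and Turtle is among the available formats.
import urllib.request

req = urllib.request.Request(
    "http://www.wikidata.org/entity/Q1",
    headers={"Accept": "text/turtle"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8")[:500])  # first 500 characters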
If not, could you guys prepare a dump of the sitelinks table, as you
suggested below? If it's not too much effort, it would be cool if you
could generate CSV or a similar simple format. We won't put the stuff
into a DB; we just extract the data, so we would have to write a
parser for SQL insert statements. CSV would be much simpler.
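To show what we'd do on our side, here is a minimal Python sketch of the consumer we have in mind, assuming a hypothetical column layout of item_id,site_id,page_title per row (whatever columns you actually dump would be fine too):

# Load a hypothetical CSV sitelinks dump into a lookup table.
# Assumed layout: item_id,site_id,page_title (one row per sitelink).
import csv
from collections import defaultdict

links_per_item = defaultdict(dict)  # item id -> {site id -> page title}

with open("wb_items_per_site.csv", newline="", encoding="utf-8") as f:
    for item_id, site_id, page_title in csv.reader(f):
        links_per_item[item_id][site_id] = page_title

# Example: all language links recorded for item Q1
print(links_per_item.get("Q1", {}))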
Thanks a lot for your help!
Christopher
On 4 May 2013 23:36, Daniel Kinzler <daniel.kinzler(a)wikimedia.de> wrote:
> On 04.05.2013 19:13, Jona Christopher Sahnwaldt wrote:
>> We will produce a DBpedia release pretty soon; I don't think we can
>> wait for the "real" dumps. The inter-language links are an important
>> part of DBpedia, so we have to extract data from almost all Wikidata
>> items. I don't think it's sensible to make ~10 million calls to the
>> API to download the external JSON format, so we will have to use the
>> XML dumps and thus the internal format.
>
> Oh, if it's just the language links, this isn't an issue: there's an additional
> table for them in the database, and we'll soon be providing a separate dump of
> that table at http://dumps.wikimedia.org/wikidatawiki/
>
> If it's not there when you need it, just ask us for a dump of the sitelinks
> table (technically, wb_items_per_site), and we'll get you one.
>
>> But I think it's not a big
>> deal that it's not that stable: we parse the JSON into an AST anyway.
>> It just means that we will have to use a more abstract AST, which I
>> was planning to do anyway. As long as the semantics of the internal
>> format remain more or less the same - it will contain the labels,
>> the language links, the properties, etc. - it's no big deal if the
>> syntax changes, even if it's not JSON anymore.
>
> Yes, if you want the labels and properties in addition to the links, you'll have
> to do that for now. But I'm working on the "real" data dumps.
>
> -- daniel
>
Heya folks :)
Lazy weekend ahead? Why not have a look at the past week on Wikidata:
http://meta.wikimedia.org/wiki/Wikidata/Status_updates/2013_06_07
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Technical Projects
Wikimedia Deutschland e.V.
Obentrautstr. 72
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations at the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as charitable
by the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
Hi everyone,
I'm happy to let you know that Yandex, an internet company from
Russia, has made a donation of 150,000 Euro to Wikimedia Deutschland
for further development of the core of Wikidata. It's great to see
more companies stepping up to not only use Wikidata but also actively
support its development.
The official press release is at
https://www.wikimedia.de/wiki/Pressemitteilungen/PM_06_13_Wikidata_Yandex
Please let me know if you have any questions.
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Technical Projects
Wikimedia Deutschland e.V.
Obentrautstr. 72
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations at the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as charitable
by the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
Hi!
To celebrate that Wikidata has received a sizable donation (more
Wikidata, yay!), I would like to share with you some ideas about sources
for statements.
The first one, proposed by TomT0m, addresses the problem of linking one
source to several statements. It could be a two-column layout: sources
would be displayed or added on one side, and the statements to connect
with those sources on the other. Another option could be an applet that
displays all sources used in the item, lets you search for or add more,
and lets you drag and drop a source onto a statement.
Then we have the issue of how to import existing sources from Wikipedia or
free external sources, like Open Library, to support claims in Wikidata.
The main problem is transforming text into the corresponding items:
checking whether an item exists and, if not, creating it when appropriate
(author, publisher, etc.). This can be done manually, but it is tedious,
so a tool might help by suggesting which items to link to and which ones
to create.
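As a rough outline of the lookup step such a tool would need, here is a minimal Python sketch against the wbsearchentities API module; item creation (e.g. via wbeditentity) and human disambiguation are deliberately left out:

# Look up candidate items for a label with wbsearchentities. A real
# tool would also need item creation and disambiguation; this only
# covers the "does an item already exist?" question.
import json
import urllib.parse
import urllib.request

def find_candidate_items(label, language="en"):
    params = urllib.parse.urlencode({
        "action": "wbsearchentities",
        "search": label,
        "language": language,
        "type": "item",
        "format": "json",
    })
    url = "https://www.wikidata.org/w/api.php?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Each hit carries an item id and a short description to choose from.
    return [(hit["id"], hit.get("description", "")) for hit in data.get("search", [])]

print(find_candidate_items("Open Library"))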
Verification of web-linked sources is also a major issue. How do we know
whether the data on a web page is still current or has changed? To
address this I propose using the OKFN Annotator together with the url
datatype. That would allow the user to "annotate" any portion of the
linked web page and store the quotation, and a crawler would then be able
to check automatically whether the linked fragment has been updated. It
would also highlight the text used as a source when visiting the website.
There is a GSoC project that aims to convert the Annotator into a
MediaWiki extension for commenting on Wikipedia articles, and I believe it
could be used to annotate texts in Wikisource (that was my original
involvement in the project) and eventually linked websites in Wikidata.
https://bugzilla.wikimedia.org/show_bug.cgi?id=46440
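To make the crawler idea concrete, here is a minimal Python sketch of the re-check step (the function name is hypothetical, and an exact substring test is only a first approximation; a real checker would strip markup and match fuzzily):

# Re-fetch the source page and test whether the stored quotation still
# occurs. Hypothetical helper; a production checker would strip HTML
# and tolerate small changes instead of requiring an exact match.
import urllib.request

def quotation_still_present(url, quotation):
    with urllib.request.urlopen(url) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    return quotation in page

# Hypothetical usage
print(quotation_still_present("http://example.org/article", "the quoted fragment"))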
Finally, as a reminder, this might be useful for creating missing items
when creating source items: https://bugzilla.wikimedia.org/show_bug.cgi?id=49068
And this for checking whether the linked item is the right one:
https://bugzilla.wikimedia.org/show_bug.cgi?id=49067
Thanks,
Micru
Hi, I asked this question at
http://www.wikidata.org/wiki/Help_talk:Lua
but I'm asking here again just in case.
In my own free time, I'm trying to reimplement w:Template:Flagicon in
Lua just for the fun of it - see the small announcement:
http://en.wikipedia.org/wiki/Wikipedia_talk:WikiProject_Flag_Template#Templ…
There is a table that maps flag images on Commons to the article about
their territory in the project where the module is hosted:
http://en.wikipedia.org/wiki/Module:Sandbox/QuimGil/FlagTranslations
Could Wikidata help here?
For instance, at http://www.wikidata.org/wiki/Q228 we have all the ISO
codes related to "Andorra", the flag image and the translations in all
Wikimedia projects. It would be amazing if this data could be leveraged.
Maybe not for each template query (that could be expensive) but at least
to populate and maintain those local tables automatically. The master
table would be the only one maintained manually, and all it would do
is state that "Andorra" = Q228, "Berlin" = Q64...
Even that master table could be highly automated one day because we do
have the ISO 3166 entries etc. in Wikidata... but let's go step by step. :)
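For illustration, a bot could regenerate a local table along these lines. A minimal Python sketch, assuming P41 is the "flag image" property (worth double-checking) and using the hand-maintained master table described above:

# Regenerate a local flag table from Wikidata. Assumptions: the master
# table maps names to Q-ids as described, and P41 is the "flag image"
# property holding a Commons file name.
import json
import urllib.parse
import urllib.request

MASTER = {"Andorra": "Q228", "Berlin": "Q64"}  # maintained by hand

def flag_image(qid):
    params = urllib.parse.urlencode({
        "action": "wbgetentities",
        "ids": qid,
        "props": "claims",
        "format": "json",
    })
    with urllib.request.urlopen("https://www.wikidata.org/w/api.php?" + params) as resp:
        entity = json.load(resp)["entities"][qid]
    claims = entity.get("claims", {}).get("P41", [])
    if claims and "datavalue" in claims[0]["mainsnak"]:
        return claims[0]["mainsnak"]["datavalue"]["value"]  # Commons file name
    return None

for name, qid in MASTER.items():
    print(name, "->", flag_image(qid))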
--
Quim Gil
Technical Contributor Coordinator @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil