If you are running a copy of Wikibase: we have split the data model code out
of WikibaseLib into a separate extension, which can be used independently
of MediaWiki.
You need to have the WikibaseDataModel extension in the extensions
directory. It will automatically be loaded by Wikibase.
https://gerrit.wikimedia.org/r/#/admin/projects/mediawiki/extensions/Wikiba…
If you are missing WikibaseDataModel, then you may see an error:
Fatal error: Class 'Wikibase\EntityId' not found
Additionally, the WikibaseQuery, WikibaseQueryEngine and WikibaseDatabase
components have been split into separate extensions. These are only needed
for the newest experimental query stuff.
https://gerrit.wikimedia.org/r/#/admin/projects/?filter=Wikibase
To make things easier, Wikibase has a composer.json file to manage these
dependencies, which you can use if you like.
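As a rough illustration, a site-local composer.json could pull the data model component in via Composer. The package name and version constraint below are assumptions about how the component is published; check the actual requirements of your Wikibase version:

```json
{
    "require": {
        "wikibase/data-model": "~0.4"
    }
}
```

Running `composer install` in that directory would then fetch the component and its own dependencies.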
--
Katie Filbert
Wikidata Developer
Wikimedia Germany e.V. | NEW: Obentrautstr. 72 | 10963 Berlin
Phone (030) 219 158 26-0
http://wikimedia.de
Wikimedia Deutschland - Society for the Promotion of Free Knowledge e.V.
Registered in the register of associations of Amtsgericht
Berlin-Charlottenburg under number 23855. Recognized as charitable by the
tax office for corporations I Berlin, tax number 27/681/51985.
Hi all,
The Guidelines for sourcing statements [1] received 72% support. There are
still two issues that were raised during the discussion that need to be
addressed:
- Sources used in Wikipedia (templates such as cite doi, cite book, etc.):
should they be stored in Wikidata to be shared across all Wikipedias?
- Source entity type: since we now know that there is going to be a specific
type of item that will not have links to Wikipedia and that is somehow
different from normal items, would it be meaningful to create this new
entity type?
This second issue was already discussed some time ago, but back then it was
not clear what the guidelines would look like. With the guidelines
completed, perhaps perceptions have changed.
The new RfC can be found here:
https://www.wikidata.org/wiki/Wikidata:Requests_for_comment/Source_items_an…
Cheers,
Micru
[1]
https://www.wikidata.org/wiki/Wikidata:Requests_for_comment/References_and_…
While at the Hackathon I had the opportunity to talk with some people from
sister projects about how they view Wikidata and the relationship it should
have to their projects. You are probably already familiar with these views,
as they have been presented several times. The hopes are high, in my
opinion too high, about what can be accomplished when Wikidata is deployed
to the sister projects.
There are conflicting needs between what belongs in Wikidata and what the
sister projects need, and that divide is far too great to be overcome just
by installing the extension. In fact, I think there is confusion between
the need for Wikidata and the need for structured data. It is true that
Wikidata embodies that technology, but I don't think all problems can be
approached with the same centralized tool, at least not from the social
side of it.
Wikiquote could have one item for each quote, or Wikivoyage an item for
each bar, hostel, restaurant, etc., and the question will always be: are
they relevant enough to be created in Wikidata? Considering that Wikidata
was initially conceived for Wikipedia, that scope wouldn't allow those
uses. However, the structured-data needs could be covered in other ways.
It doesn't need to be one big Wikidata addressing it all. It could well be
a central Wikidata addressing common issues (like author data, population
data, etc.), plus separate Wikidata installs on each sister project that
requires one. For instance, there could be a data.wikiquote.org, a
data.wikivoyage.org, etc., each catering to the needs of its community,
needs that I predict will grow as soon as the benefits become clear, and of
course linked to the central Wikidata whenever needed. Even Commons could
be "wikidatized", with each file becoming an item and having different
labels representing the file name depending on the language version being
accessed.
Could this be the right direction to go?
Cheers,
Micru
Today we reached the milestone of having cleaned up half of all our
interwiki conflicts. Along the way, a lot of interwiki conflicts on other
Wikipedias have been resolved as well.
Romaine
Over the last year, we have seen some discussion about whether and how
Wikidata can be useful for Wikimedia Commons. One aspect of this is
maintaining metadata as structured data.
On behalf of the Wikidata development team, I just posted a proposal for this:
https://commons.wikimedia.org/wiki/Commons:Wikidata_for_media_info
We hope you regard this as an invitation to discuss the proposal and to
identify use cases that it does not yet cover.
Please use the proposal's talk page as a central place for discussion about
Wikidata and media metadata.
Please invite others involved with the Wikidata or Commons projects, or
otherwise interested in media metadata, to take part in the discussion.
Thank you,
Daniel Kinzler
Hello all,
I was wondering if someone knows roughly the rate of Wikidata changes per
minute, or even per day. I tried to watch the feed for a while, but it
varies a lot.
What would be the maximum and minimum rates? Should we also expect them to
increase as a result of more contributions?
I'm talking about the updates posted in the RSS feed
here<http://www.wikidata.org/w/index.php?title=Special:RecentChanges&feed=atom>
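One way to estimate the rate yourself is to take a batch of timestamps from that feed and average over the time span they cover. A minimal Python sketch of just the rate calculation (fetching and parsing the Atom feed is left out; the timestamp format shown matches the feed's ISO-8601 style, but verify it against the actual feed):

```python
from datetime import datetime

def edits_per_minute(timestamps):
    """Estimate the average edit rate from a list of ISO-8601
    timestamps (e.g. collected from the recent-changes feed)."""
    if len(timestamps) < 2:
        return 0.0
    times = sorted(datetime.strptime(t, "%Y-%m-%dT%H:%M:%SZ")
                   for t in timestamps)
    # Span between first and last edit, in minutes.
    span = (times[-1] - times[0]).total_seconds() / 60.0
    return (len(times) - 1) / span if span else float("inf")
```

For example, three edits spread over one minute give a rate of 2.0 edits per minute; sampling at different times of day would show how much the rate varies.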
thanks
Regards
-------------------------------------------------
Hady El-Sahar
Research Assistant
Center of Informatics Sciences | Nile University<http://nileuniversity.edu.eg/>
email : hadyelsahar(a)gmail.com
Phone : +2-01220887311
http://hadyelsahar.me/
<http://www.linkedin.com/in/hadyelsahar>
Heya folks :)
Here's your summary of what happened around Wikidata this week:
http://meta.wikimedia.org/wiki/Wikidata/Status_updates/2013_06_21
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Technical Projects
Wikimedia Deutschland e.V.
Obentrautstr. 72
10963 Berlin
www.wikimedia.de
Hello,
I would like everyone interested in the interaction of Wikidata and
Wiktionary to take a look at the following proposal. It tries to serve all
use cases mentioned so far while remaining fairly simple to implement.
<http://www.wikidata.org/wiki/Wikidata:Wiktionary>
To the best of our knowledge, we have reviewed all discussions on this
topic, as well as related work like OmegaWiki, WordNet, etc., and are
building on top of that.
I would greatly appreciate it if some liaison editors could reach out to
the Wiktionaries in order to get a wider discussion base. We are currently
reading more on related work and trying to improve the proposal.
It would be great if we could keep the discussion on the discussion page on
the wiki, or at least have pointers there, so as to bundle it a bit.
<http://www.wikidata.org/wiki/Wikidata_talk:Wiktionary>
Note that we are publishing this proposal early. Implementation has not
started yet (obviously, otherwise the discussion would be a bit moot), and
this is more of a mid-term commitment: if the discussion goes smoothly, it
might be implemented and deployed by the end of the year or so, although
this obviously depends on the results of the discussion.
Cheers,
Denny
--
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de
I'm just back from the LODLAM summit in Montreal, Canada, and here is a
short report.
==About LODLAM and why I was there==
LODLAM (http://lodlam.net) is a gathering of people interested in LOD
(linked open data) and LAM (libraries, archives, and museums), so I thought
it would be interesting to find partners and raise awareness about the
Wikisource revitalization effort, all this thanks to the Grants:IEG
support. The audience was very diverse: not only from cultural
institutions, but also from research centers and private companies.
OKFN, Europeana, DPLA, and other big players had representatives there.
AFAIK, I was the only person from the Wikimedia movement, so I ended up
representing "all things wiki", especially Wikidata. These spontaneous
activities are briefly described here [1].
The format of the event was that of an [[open-space technology]] gathering,
similar to unconferences.
Some information and reflections to share:
== Rewards & contributor retention ==
During a talk about licenses (which dealt with the difficulties of having
content under different licenses), there was some mention of Datahub [2], a
recently launched project for sharing datasets, formerly known as CKAN.
The discussion revolved around the reward that contributors get for
releasing their datasets. There was some consensus that "the use of the
released data is the reward", which led to another debate about how to
convey data use to contributors. This can be complicated, or simplified to
just having the person using the dataset leave a comment of gratitude.
All this led me to think about the emotional vs. rational rewards that
users (or institutions) obtain from contributing content to Wikipedia,
Commons, Wikisource, etc. Are "active thanks", as currently implemented,
really sustainable and scalable? Will all the contributors who deserve it
get a thanks some day? Could personalized view counts/ratings reports about
uploaded pictures, major contributions to WP articles, etc. have some
impact on contributor satisfaction/retention? Would "automated personal
impact reports" free collaborators from the duty of thanking one another,
or would that mean fewer personal interactions?
These are some questions that I leave open here.
==Semantic annotations ==
As you might know, there is a GSoC project [3] which aims to convert the
OKFN Annotator [4] into a MediaWiki extension. It is a great project that
will enable inline comments in MediaWiki projects, but it shouldn't be seen
as the end goal, only a step in the direction of semantic annotations.
What could semantic annotations mean for Wikipedia? More precise answers to
questions. Instead of just having "millions of articles" there would be the
possibility of answering "trillions of questions" (or at least pointing to
the text fragment(s) that has/have the answer). This kind of paradigm shift
might need some pondering and broad community discussion.
What could semantic annotations mean for Wikisource? Text
interconnectedness: being able to relate concepts, authors, fragments...
and then to query those relationships.
==Input interfaces for linked data==
The best linked data is the data that is invisible to the user, but then,
how do we enable end users to "write" linked data? Of the several
approaches, the most convincing seemed to be using a text symbol (#, +, !,
or others) to indicate that the text following it represents a linked
entity.
In the case of the VisualEditor in Wikipedia, one could write
"#article_name", and right after entering the "#" and the first letters, a
list of options (from Wikidata) would show up to autocomplete/disambiguate.
After selecting the right item, one could continue writing, or type a dot
to select a property (as in some object-oriented programming languages).
This approach simplifies both interlinking and data inclusion.
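The "#" trigger described above amounts to prefix matching against entity labels. A minimal Python sketch of the idea, where a hypothetical local list of labels stands in for what in a real editor would be a call to a Wikidata search service:

```python
def autocomplete(typed, labels):
    """Return candidate entity labels matching the text typed
    after '#' (case-insensitive prefix match). `labels` is a
    stand-in for results a Wikidata search API would return."""
    prefix = typed.lstrip("#").lower()
    return [label for label in labels
            if label.lower().startswith(prefix)]
```

For example, typing "#Ber" against the labels ["Berlin", "Bern", "Paris"] would offer "Berlin" and "Bern" for disambiguation; the dot-to-select-a-property step would then repeat the same lookup over that item's properties.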
==Other news==
- The Getty vocabularies will be published as linked open data (late 2013,
ODC-BY 1.0 license) [6]
- Pund.it [5] - an open source semantic annotation project that won the
LODLAM Challenge award
- Karma, a tool for mapping data to ontologies [7]
Cheers,
Micru
[1] http://lists.wikimedia.org/pipermail/wikidata-l/2013-June/002388.html
[2] http://datahub.io/
[3]
https://www.mediawiki.org/wiki/User:Rjain/Proposal-Prototyping-inline-comme…
[4] http://okfnlabs.org/annotator/
[5] http://www.thepund.it/
[6] http://www.getty.edu/research/tools/vocabularies/index.html
[7]
http://summit2013.lodlam.net/2013/06/20/karma-tools-for-mapping-data-to-ont…