I recently started following mediawiki/extensions/Wikibase on Gerrit,
and was quite astonished to find that nearly all of the 100 most
recently updated changes appear to be owned by WMDE employees (the
exceptions being one change by Legoktm and some from L10n-bot). This is
not the case with, for example, mediawiki/core.
While this may be desired by the Wikidata team for corporate reasons, I
feel that encouraging code review by volunteers would empower both
Wikidata and third-party communities with new ways of contributing to
the project, and raise awareness of the development team's goals in the
wider community.
The messy naming conventions play a role too: Extension:Wikibase
is supposed to host technical documentation but instead redirects to the
Wikibase <https://www.mediawiki.org/wiki/Wikibase> portal, with actual
documentation split between Extension:Wikibase Repository and
Extension:Wikibase Client,
ignoring the fact that the code is actually developed in a single
repository (correct me if I'm wrong). Just to add some more confusion,
there's also Extension:Wikidata build
<https://www.mediawiki.org/wiki/Extension:Wikidata_build> with no
clear explanation of what it is for.
And what about wmde on GitHub <https://github.com/wmde> with countless
creatively-named repos? They make life even harder for potential
contributors.
Finally, the ever-changing client-side APIs make gadget development a
pain in the ass.
Sorry if this sounds like a slap in the face, but it had to be said.
I am investigating some concepts from signal processing and relating them
to data manipulation. It is somewhat difficult because the way computer
scientists relate to concepts is very dogmatic: something is either black
or white. I have not found much on "things that under certain
circumstances can be considered black-ish, and under another set of
circumstances can be considered white-ish".
In signal processing there is the concept of amplitude, which is just the
signal strength. For humans, language is like an amplitude-modulated
communication process: the receiver picks up not only the signal but also
its amplitude, depending on context, awareness, previous knowledge, and
other factors, which in turn can be considered waves being processed by
the ontological biological-organizational complex, the body-mind.
It is tough to describe that a certain concept might have a certain
amplitude in some situations and other amplitude in other situations, and
perhaps even harder to make a human interface for it.
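For what it's worth, the "black-ish in some circumstances, white-ish in
others" idea is close to what fuzzy logic calls a membership function, with
context shifting the curve. A minimal sketch of that, purely illustrative
(the function, midpoints, and steepness below are my own choices, not
anything in Wikidata):

```python
import math

def membership(value, midpoint, steepness=10.0):
    """Graded degree (0..1) to which `value` counts as 'white',
    using a logistic curve instead of a hard black/white cutoff."""
    return 1.0 / (1.0 + math.exp(-steepness * (value - midpoint)))

# The same signal is read differently depending on context: a context
# that lowers the midpoint makes 0.5 read as "white-ish", while one
# that raises the midpoint makes the very same 0.5 read as "black-ish".
dim = membership(0.5, midpoint=0.3)     # high degree of "white"
bright = membership(0.5, midpoint=0.7)  # low degree of "white"
```

The point is only that the "amplitude" of a concept can be modeled as a
degree that a context-dependent function assigns, rather than a fixed
boolean property of the item.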
Has anyone attempted it in the past? If Q items are not static entities,
what is the best way to convey that they are not? And is it possible or
desirable at all?
Perhaps these questions are more suitable for a Wikidata 2.0, or perhaps it
is already doable, who knows.
I'm a longtime OSM contributor. I like the idea of Wikidata and what I'm
really interested in is a sort of bridge between both projects.
Would somebody be interested in writing a game-like application which
would invoke JOSM Remote Control (JOSM RC) at the coordinates in the
Wikidata item and put the corresponding wikidata tag
in the clipboard?
That would make it easier to add such tags to the OSM objects.
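As far as I know, JOSM Remote Control is a plain HTTP interface on
localhost port 8111, so the "jump to the item's coordinates" part is just a
matter of building a load_and_zoom URL around the Wikidata coordinates.
A rough sketch (the margin value and function name are arbitrary choices
of mine):

```python
import urllib.parse

def josm_rc_url(lat, lon, margin=0.001):
    """Build a JOSM Remote Control load_and_zoom URL centered on the
    given coordinates; JOSM listens on 127.0.0.1:8111 by default."""
    params = {
        "left": round(lon - margin, 6),
        "right": round(lon + margin, 6),
        "bottom": round(lat - margin, 6),
        "top": round(lat + margin, 6),
    }
    return ("http://127.0.0.1:8111/load_and_zoom?"
            + urllib.parse.urlencode(params))

# e.g. coordinates of the Eiffel Tower (Q243):
url = josm_rc_url(48.8583, 2.2944)
# Opening this URL (with JOSM running) downloads and zooms to that area.
```

The game application would fetch the coordinate claim from the item, open
this URL, and put "wikidata=Q..." in the clipboard for pasting in JOSM.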
Hey folks :)
The rollout of arbitrary access on Dutch Wikipedia and French
Wikisource seems to be going well so we're going to continue the
rollout. The next projects will be:
* 18 May: Farsi Wikipedia, English Wikivoyage, Hebrew Wikipedia
* 1 June: Italian Wikipedia, all remaining Wikisource projects
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under the number 23855 Nz. Recognized as a
non-profit organization by the Finanzamt für Körperschaften I Berlin,
tax number 27/681/51985.
I just noticed that we have a number of "orphaned items" which were
created and imported from some Wikipedia article that then got deleted.
The result is an item with almost no data, no sitelinks, and all
references claiming "imported from X Wikipedia".
Here is what happened:
It would be good to have a process for dealing with such cases. I am not
saying that we must delete such items immediately, but it seems obvious
that they need some special attention to become self-sustaining even
without Wikipedia articles associated.
Things that would be important for keeping such items:
* Links to other external datasets that confirm the existence of the thing.
* Links to authoritative web sites that confirm the existence of the thing.
* Proper references for all data (we always want that, but here it's
even more critical: "imported from Wikipedia" is never great, but at
least it leaves some hope of finding proper references if the Wikipedia
page still exists).
In cases like the above, deletion seems to be the most reasonable
solution (the little data that is there can easily be added again if
needed in the future). It seems that one could automatically collect
such candidates for deletion (pages that are not used as property
values, have no sitelinks, have no identifier properties, have not been
edited for more than a month, and have fewer than, say, ten statements).
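The statement- and sitelink-counting criteria are mechanical enough to
check against the JSON that the wbgetentities API returns for an item; a
rough sketch (the function name and the threshold of ten are my own, and a
real check would also need separate queries for incoming links, last-edit
date, and which properties count as identifiers):

```python
def is_orphan_candidate(entity, max_statements=10):
    """Rough check of the deletion-candidate criteria against a
    Wikidata entity as returned by action=wbgetentities (JSON).
    Does NOT check incoming links or the last-edited date, which
    need separate API queries."""
    sitelinks = entity.get("sitelinks", {})
    claims = entity.get("claims", {})
    n_statements = sum(len(v) for v in claims.values())
    return len(sitelinks) == 0 and n_statements < max_statements

# A freshly imported, now-orphaned item might look roughly like this:
orphan = {"sitelinks": {}, "claims": {"P31": [{"mainsnak": {}}]}}
linked = {"sitelinks": {"enwiki": {"title": "X"}}, "claims": {}}
```

A bot could run such a check over recently unlinked items and feed a
cleanup or deletion-discussion queue rather than deleting outright.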
Today a Wikidata Visualization Challenge starts, a competition aimed at
making it easier to understand the value of Wikidata, what is in there,
and/or how it is being created.
The challenge ends June 1 and there are some nice prizes available.
Check out http://wvc.se for more details.
*Best regards,*
*Jan Ainali*
CEO, Wikimedia Sverige <http://wikimedia.se>
*Imagine a world in which every human being has free access to the sum
of all human knowledge. That is what we do.*
Become a member. <http://blimedlem.wikimedia.se>