Freebase was launched to be a "Wikipedia for structured data" because, in
2007, no such project existed. But now we have Wikidata, and Wikidata
and its community are developing very fast. Today, the goals of Freebase
might be better served by supporting Wikidata [1].
Freebase has seen a huge amount of effort go into it since it went public
in 2007. It makes a lot of sense to make the results of this work available
to Wikidata. But knowing Wikidata and its community a bit, it is obvious
that we cannot and should not simply upload Freebase data to Wikidata:
Wikidata would prefer the data to be referenced to external, primary
sources.
To that end, Google will soon start work on an open-source tool that
will run on Wikimedia Labs and allow Wikidata contributors to find
references for a statement and then upload both the statement and the
reference to Wikidata. We will release several sets of Freebase data,
ready for consumption by this tool, under a CC0 license. The tool should
also work for statements already in Wikidata that lack sufficient
references, or for other datasets, such as DBpedia and other machine-extraction efforts.
To make sure we get it right, we invite you to participate in the design
and development of this tool here:
https://www.wikidata.org/wiki/Wikidata:Primary_sources_tool
I hope you are as excited as I am about this project, and I hope that you
will join me in making this a reality. I am looking forward to your
contributions!
[1] https://plus.sandbox.google.com/109936836907132434202/posts/bu3z2wVqcQc
Hey everyone :)
Wow. What a day... Hard to beat that but let me try anyway.
We have just deployed new code. This includes a first version of
language fallbacks, a new datatype to link to properties, fixes for a
nasty focus issue when adding statements and more performance
improvements. Language fallbacks so far only work for the linked items
and properties in statements. They do not take your Babel box into
account, and the entity selector does not do fallbacks yet. I'd love to
have your feedback on what we have so far, to see how we need to
further improve language fallbacks.
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Society for the Promotion of Free Knowledge e. V.
Registered in the register of associations at the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as a non-profit
by the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
Hi all,
Wikimedia New York City will be hosting a Wikidata hackathon and beginners
workshop this coming Sunday. This will be a good event to meet Wikimedians
involved with cultural institutions, structure a bunch of data, and help
new users.
If you're in the area, come!
When:
Sunday, December 14, 1:00 - 5:00 PM
Where:
55 Washington Street, Brooklyn, NY 11201
Room 321 (BLIP Outpost)
Details and sign up:
https://en.wikipedia.org/wiki/Wikipedia:Meetup/NYC/December_Wikidata
Cheers,
Eric
https://www.wikidata.org/wiki/User:Emw
Hi,
There's this bug:
https://phabricator.wikimedia.org/T35704
Basically, the "Nearby" function in the Wikipedia Android app only works
if the coordinates template on the relevant language's Wikipedia uses
the magic word from the GeoData extension.
And I wonder: Is this really needed? Updating templates in almost 300
languages doesn't scale well, and Wikidata already supports coordinates. It
also makes more general sense to me to query a structured database like
Wikidata instead of poking around with templates and magic words as it is
done with GeoData.
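For what it's worth, querying Wikidata directly could look roughly like the following minimal Python sketch (my own illustration, not an existing implementation). It builds a `wbgetclaims` API request for the coordinate-location property (P625) and extracts coordinate pairs from a response in the shape that API returns; the sample payload below is canned, not live data:

```python
from urllib.parse import urlencode

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def claims_url(entity_id, prop="P625"):
    """Build a wbgetclaims request URL for an item's coordinate claims."""
    return WIKIDATA_API + "?" + urlencode({
        "action": "wbgetclaims",
        "entity": entity_id,
        "property": prop,
        "format": "json",
    })

def extract_coordinates(claims_response, prop="P625"):
    """Pull (latitude, longitude) pairs out of a wbgetclaims JSON response."""
    coords = []
    for claim in claims_response.get("claims", {}).get(prop, []):
        value = claim["mainsnak"].get("datavalue", {}).get("value", {})
        if "latitude" in value and "longitude" in value:
            coords.append((value["latitude"], value["longitude"]))
    return coords

# Canned response in the shape the API returns for e.g. Berlin (Q64):
sample = {
    "claims": {
        "P625": [{
            "mainsnak": {
                "snaktype": "value",
                "datavalue": {"value": {"latitude": 52.516, "longitude": 13.383}},
            }
        }]
    }
}
print(extract_coordinates(sample))  # [(52.516, 13.383)]
```

An app would fetch `claims_url("Q64")` over HTTP and feed the decoded JSON to `extract_coordinates`; no template parsing or magic words involved.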
But that's me, and I might be missing something.
Is Wikidata actually ready for this technically?
Are coordinates filled in for all the relevant items, or are they still
better covered in the Wikipedias themselves?
Thanks.
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
“We're living in pieces,
I want to live in peace.” – T. Moore
Are there any tools currently that could do something like this?
Input: a category/class - something that hangs off subclass-of (P279)
relationships. For example: "gene" (Q7187).
Output: an interactive visual representation of the properties that are
being used by the entities connected to this class entity as its
members, e.g. the properties used by RELN (Q414043).
The interactive part would be to optionally expand the visualization to
include the equivalent representation for the objects of selected
properties. e.g. an expanding tree of properties for example.
I'm looking for something akin to a schema inspector for a more typical
database. The reason for this request is to ease understanding and
communication about data structures that go beyond single entities. In my
example here, we have important information that people typically associate
with the concept of 'gene', but to get to it, you need to hop through the
connection between the gene and protein representations first. For
example: RELN (gene) encodes reelin (protein), which interacts with VLDR
(protein). And this is a relatively simple example.
This would be very helpful for the molecular biology wikiproject as we work
out what are likely to be increasingly complex structures of the data we
are importing. I imagine such a tool could also be very useful for other
groups and could also be useful for a general visual query composer.
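As a rough starting point for such a schema inspector, here is a minimal Python sketch (my own illustration, not an existing tool) that tallies which properties are used, and how often, across a set of entity documents in the shape the `wbgetentities` API returns. The `genes` sample is canned; P279 (subclass of) and P688 (encodes) are real property IDs:

```python
from collections import Counter

def property_usage(entities):
    """Count how often each property appears across a set of entities.

    `entities` is a list of entity documents in the shape returned by the
    wbgetentities API: each has a "claims" mapping of property ID -> claims.
    """
    counts = Counter()
    for entity in entities:
        for prop, claims in entity.get("claims", {}).items():
            counts[prop] += len(claims)
    return counts

# Canned example: two hypothetical gene items
genes = [
    {"claims": {"P279": [{}], "P688": [{}]}},      # subclass of; encodes one protein
    {"claims": {"P279": [{}], "P688": [{}, {}]}},  # a gene encoding two proteins
]
usage = property_usage(genes)
print(usage.most_common())  # [('P688', 3), ('P279', 2)]
```

The interactive part would then recurse: for a selected property such as P688, fetch the object items and run the same tally on them, giving the expanding tree of properties described above.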
Thoughts?
thanks
-Ben
Hello,
sorry for cross-posting. I would like to inform you about the following
job offer: the Information Center for Education of the German Institute
for International Educational Research (DIPF, Germany) is looking for a
Software Engineer in Web Technologies (mainly Semantic MediaWiki
development) in the context of the BMBF-funded Frankfurt
eHumanities Center (FeHZ), starting at the earliest possible date. The
position is limited in time until 30.11.2017, at 50% of regular weekly
working time, payment EG 12 TV-H, reference no. IZB 2014-09; the
application deadline is 06.01.2015.
The complete and official job offer is published in German here:
http://www.dipf.de/de/dipf-aktuell/stellenangebote/softwareentwickler-in-we…
Best regards
Christoph Schindler
------------------------//
German Institute for International Educational Research (DIPF, Germany)
Information Center for Education
+49 (0)69 24708-373
schindler(a)dipf.de
I think the redirects issue has been raised here to make up for the poor
Wikidata-Wikipedia integration we currently have.
I imagine Winter showing a list of "related articles in other languages".
Reusing one of the examples: when viewing [[en:Rob Bourdon]], the
software should infer from [[d:Q19205]] that Rob Bourdon is /member of/
Linkin Park (Q261), for which the German Wikipedia has an article, that
will in turn be linked. Very little brain work is needed to understand
that the German article about the band is likely to contain information
about individual members.
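A minimal sketch of that inference, assuming entity documents in `wbgetentities` shape (the `rob` and `linked` data below is canned for illustration; P463 is the real "member of" property):

```python
def related_sitelinks(item, others, wiki="dewiki"):
    """From `item`'s statements, find linked items with an article on `wiki`.

    `item` and the values of `others` are entity documents in wbgetentities
    shape; `others` maps Q-id -> document for the items `item` links to.
    """
    related = {}
    for prop, claims in item.get("claims", {}).items():
        for claim in claims:
            value = claim["mainsnak"].get("datavalue", {}).get("value", {})
            target = value.get("id")
            if target and target in others:
                title = others[target].get("sitelinks", {}).get(wiki, {}).get("title")
                if title:
                    related[target] = title
    return related

# Canned example: Rob Bourdon (Q19205) is a member of (P463) Linkin Park
# (Q261), and the German Wikipedia has an article on the band.
rob = {"claims": {"P463": [{"mainsnak": {"datavalue": {"value": {"id": "Q261"}}}}]}}
linked = {"Q261": {"sitelinks": {"dewiki": {"title": "Linkin Park"}}}}
print(related_sitelinks(rob, linked))  # {'Q261': 'Linkin Park'}
```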
While this may sound Reasonator-ish, if correctly implemented in
Wikibase it could improve interlanguage and interproject links in a way
that just cannot be achieved with redirects. Imagine the Wikipedia
entries for "Linkin Park" linking (pun intended) to Wikiquote entries of
each member.
For more than a year I have been asking users to add their articles to
Wikidata once they have written them. That seemed successful: they added
their articles more and more and understood how to do it. Until
recently. Now I get more and more complaints from users who no longer
understand how to add a newly written article to an item. They seem to
have tried, but fail to actually get it done. That is a bad development!
Romaine