Hey folks,
we plan to drop the wb_entity_per_page table sometime soon[0], because
it is simply not required (we will likely always have a programmatic
mapping from entity id to page title) and, as it stands, it does not
support non-numeric entity ids. Because of this, removing it is a
blocker for the Commons metadata work.
Is anybody using that for their tools (on tool labs)? If so, please
tell us so that we can give you instructions and a longer grace period
to update your scripts.
Cheers,
Marius
[0]: https://phabricator.wikimedia.org/T95685
Hello everyone,
I am looking for a text corpus that is annotated with Wikidata entities.
I need this for the evaluation of an entity linking tool based on
Wikidata, which is part of my bachelor thesis.
Does such a corpus exist?
Ideally, the corpus would be annotated in the NIF format [1], since I
want to use GERBIL [2] for the evaluation, but that is not strictly
necessary.
Thanks for any hints!
Samuel
[1] https://site.nlp2rdf.org/
[2] http://aksw.org/Projects/GERBIL.html
Hello all,
The international Wikimedia conference will take place in Montreal (Q340)
<https://www.wikidata.org/wiki/Q340> on August 11-13. As the call for
submissions is about to open, let's talk about what you would like to see
happening around Wikidata.
Are you planning to submit a talk, a workshop, or a meetup? Which topics
would you like to discuss with the development team? Let's share our
ideas on Wikidata:Wikimania 2017
<https://www.wikidata.org/wiki/Wikidata:Wikimania_2017>!
Thanks,
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 B. Recognized as a non-profit by
the Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.
Hi folks!
My name is Glorian Yapinus, but you can simply call me Glorian ;) . For the
next 6 months, I will assist Lydia in supporting you all.
As for my educational background, I hold a bachelor's degree in
Information Technology, and I am currently working on my master's in
Software Engineering and Management.
I am a warm and friendly person, so please do not hesitate to reach out
to me with any queries :-)
Last but not least, I am looking forward to working with you.
Cheers,
Glorian
--
Glorian Yapinus
Product Management Intern for Wikidata
Imagine a world, in which every single human being can freely share in the
sum of all knowledge. That's our commitment.
How can I help push this along?
https://phabricator.wikimedia.org/T141813
After playing with the service more, I realized that we could allow some
cool integration directly in OpenRefine and with reconciliation ... if
only the SPARQL Query Service had full-text search (FTS).
I want and need to be able to do this:
SELECT ?item ?itemLabel ?_image
WHERE
{
  ?item wdt:P178 wd:*"infocom".*
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  OPTIONAL { ?item wdt:P18 ?_image. }
}
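In the meantime, something close to this can be approximated with a label filter. This is only a sketch of a workaround, not a substitute for FTS: it assumes the P178 (developer) targets have English labels, and it scans labels with CONTAINS instead of using a text index, so it is far slower than real full-text search.

```sparql
# Workaround sketch: match items whose developer's English label
# contains "infocom" by filtering labels, rather than via FTS.
SELECT ?item ?itemLabel ?_image
WHERE
{
  ?item wdt:P178 ?developer .
  ?developer rdfs:label ?devLabel .
  FILTER(LANG(?devLabel) = "en" && CONTAINS(LCASE(?devLabel), "infocom"))
  OPTIONAL { ?item wdt:P18 ?_image. }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
```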
-Thad
+ThadGuidry <https://www.google.com/+ThadGuidry>
Hi Wikidatans,
Inspired by the Zika corpus project at WikiCite 2016,[1] I wanted to see if
it was (a) possible and (b) practical to set up an automatically updating
table (ideally for use in Wikipedia but pulling from Wikidata) that would
allow the Black Lunch Table Project initiative on Wikipedia to automate and
customize their Task List -- which is somewhat massive.[2]
Heather Hart from Black Lunch Table holds editathons all over North
America,[3] and I think they will be intersectional with some of the other
Wikipedia initiatives. So for events in places like North Carolina or New
Orleans, it would be helpful for her to be able to pull artists from those
communities. And for Art+Feminism in March, it would be helpful for her to
be able to pull artists who are female-identifying.
We set up some categories as a way to narrow down a possible SPARQL query:
- Wikipedia category:
https://en.wikipedia.org/wiki/Category:Visual_artists_of_the_African_diaspo…
- Wikimedia Commons category:
https://commons.wikimedia.org/wiki/Category:Visual_artists_of_the_African_d…
- Wikidata item:
https://www.wikidata.org/wiki/Q28654190
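As a starting point, a query along these lines could pull female-identifying visual artists for an Art+Feminism-style event. The property and item IDs used here (P106 occupation, Q3391743 visual artist, P21 sex or gender, Q6581072 female) are standard Wikidata ones, but this is only a sketch: how Black Lunch Table list membership itself gets modeled on Wikidata would still need to be decided.

```sparql
# Sketch: female-identifying visual artists with English labels.
SELECT ?artist ?artistLabel
WHERE
{
  ?artist wdt:P106 wd:Q3391743 .  # occupation: visual artist
  ?artist wdt:P21 wd:Q6581072 .   # sex or gender: female
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
```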
Right now the task list is approaching 1,000 entries, and I assume the task
list will only grow, as it is a crowdsourced list. Understandably, not
all entrants are notable, and many are at mid-career level, so even a
stub might be a stretch for Wikipedia. But a solid
percentage are definitely notable and are deserving of pages.
I was also thinking that this sort of functionality would be helpful for
other initiatives -- maybe also Art+Feminism -- so this process might be
transferable to others too.
Goal: To automate the task list process somewhat.
And at minimum it would be a good Wikidata project.
I think that even exploring the possibilities here has been very fruitful
and illustrative of Wikidata's functionality for both myself and
Heather/Black Lunch Table Project. I think it might also provide very
positive outreach for others as well.
Best,
- Erika
[1]
https://www.wikidata.org/wiki/Wikidata:WikiProject_Source_MetaData/Wikidata…
[2]
https://en.wikipedia.org/wiki/Wikipedia:Meetup/Black_Lunch_Table/Lists_of_A…
[3]
https://en.wikipedia.org/wiki/Wikipedia:Meetup/Black_Lunch_Table/Event_Arch…
*Erika Herzog*
Wikipedia *User:BrillLyle <https://en.wikipedia.org/wiki/User:BrillLyle>*
Hi all,
If you look at the recent changes, most items have labels in English, and
those are shown in the recent changes and elsewhere (so we know what an
item is about without opening it first). But not all items have labels,
and the items without an English label often have a label only in
Chinese, Arabic, Cyrillic script, Hebrew, etc. This forms a significant gap.
Is there a way to easily make a transcription from one script to another?
Or alternatively, is there a database that has such transcriptions?
The other way round might also be helpful for users of Wikidata who
use/read it in Chinese, Arabic, Cyrillic script, Hebrew, etc.
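For what it's worth, the size of the gap can be probed with a query. This is a rough sketch (limited so it does not time out) that lists items having a Russian label but no English one; the same pattern works for any language pair:

```sparql
# Sketch: items labeled in Russian but lacking an English label.
SELECT ?item ?ruLabel
WHERE
{
  ?item rdfs:label ?ruLabel .
  FILTER(LANG(?ruLabel) = "ru")
  FILTER NOT EXISTS {
    ?item rdfs:label ?enLabel .
    FILTER(LANG(?enLabel) = "en")
  }
}
LIMIT 100
```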
Thanks!
Romaine
Hi everyone,
Currently, the Wikidata Query Service does not support federated
queries. Stas has been working on making it possible to enable
federation for a limited number of SPARQL endpoints. We have now opened
a page to collect suggestions and reach agreement on the first endpoints
to support. Please add your comments and suggestions:
https://www.wikidata.org/wiki/Wikidata:SPARQL_federation_input
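For context, SPARQL 1.1 federation lets a query delegate part of its pattern to another endpoint via the SERVICE keyword. The endpoint URL and the linking predicate below are purely illustrative; which endpoints will actually be allowed is exactly what the page above is collecting input on:

```sparql
# Illustrative federation sketch: join Wikidata items against an
# external endpoint (example.org is a placeholder, not a real endpoint).
SELECT ?item ?external
WHERE
{
  ?item wdt:P31 wd:Q5 .
  SERVICE <https://example.org/sparql> {
    ?external owl:sameAs ?item .
  }
}
LIMIT 10
```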
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de