Dear Sir,
I thank you for your efforts. I was honoured to see your recent return to the Wikidata mailing list. Your contributions have been important in helping the Wikidata community develop the project. Concerning the paper on Medical Wikidata that we began writing in 2016, and which was interrupted for several important reasons, I am honoured to inform you that it has been accepted for publication in the Journal of Biomedical Informatics and will appear online within two weeks.
Yours Sincerely,
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
GLAM and Education Coordinator, Wikimedia TN User Group
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
____________________
+21629499418
-------- Original message --------
From: Denny Vrandečić <vrandecic(a)google.com>
Date: 2019/09/21 00:55 (GMT+01:00)
To: "Discussion list for the Wikidata project." <wikidata(a)lists.wikimedia.org>
Subject: Re: [Wikidata] Personal news: a new role
Thanks everyone for this warm welcome (back)!
On Fri, Sep 20, 2019, 10:38 Denny Vrandečić <vrandecic(a)google.com<mailto:vrandecic@google.com>> wrote:
Off to my Todo list :)
On Thu, Sep 19, 2019 at 10:46 AM Andy Mabbett <andy(a)pigsonthewing.org.uk<mailto:andy@pigsonthewing.org.uk>> wrote:
On Thu, 19 Sep 2019 at 17:56, Denny Vrandečić <vrandecic(a)google.com<mailto:vrandecic@google.com>> wrote:
> I am moving to a new role in Google Research, akin to a Wikimedian in
> Residence
That's marvelous; congratulations.
Please bear in mind this project:
https://commons.wikimedia.org/wiki/Commons:Voice_intro_project
and the Googlers who have Wikipedia articles about them (or, indeed,
Wikidata items).
--
Andy Mabbett
@pigsonthewing
http://pigsonthewing.org.uk
_______________________________________________
Wikidata mailing list
Wikidata(a)lists.wikimedia.org<mailto:Wikidata@lists.wikimedia.org>
https://lists.wikimedia.org/mailman/listinfo/wikidata
Dear all,
I thank you for your efforts. To learn more about word embeddings and semantic similarity, please refer to our research group's survey on the topic, available at https://www.sciencedirect.com/science/article/pii/S0952197619301745. If you would like us to work on using these techniques to enrich Lexicographical Data on Wikidata, we would be honoured to do so. However, we would face two main problems: the first is funding, and the second is that we need people to validate the information returned by these two techniques and to adjust it if needed.
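For readers unfamiliar with the semantic-similarity technique mentioned above, the core operation is usually cosine similarity between word vectors. A minimal sketch (the three-dimensional "embeddings" below are invented for illustration; real word2vec vectors have hundreds of dimensions):

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings" (made up for illustration):
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
apple = [0.1, 0.2, 0.95]

# Semantically related words end up with more similar vectors:
assert cosine_similarity(king, queen) > cosine_similarity(king, apple)
```

In practice the vectors would come from a trained model rather than being written by hand, but the similarity computation is the same.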
Yours Sincerely,
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
Undergraduate Researcher, UR12SP36
GLAM, Research and Education Coordinator, Wikimedia TN User Group
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
____________________
+21629499418
-------- Original message --------
From: Thomas Douillard <thomas.douillard(a)gmail.com>
Date: 2019/09/20 12:08 (GMT+01:00)
To: "Discussion list for the Wikidata project." <wikidata(a)lists.wikimedia.org>
Subject: [Wikidata] Lexical data and automated learning, or a reply to « I don't believe in Wikidata senses development »
I recently read the French sentence « Je ne crois pas au développement des sens. » (translation: "I don't believe in the development of senses."), following links in a Wikidata Weekly Summary to the slides of a French meeting about Wikidata lexicographical data. I do believe in it (regardless of the arguments presented in the slides), and I am writing this email to try to explain why.
I'm curious to know whether there is already some work on the automated discovery of lexicographical data / senses with the help of Wikidata items.
There are tools for the automated tagging of terms with the corresponding Wikidata item, which have appeared on this mailing list and/or in the Wikidata weekly summaries.
There are also methods that can discover senses in texts using only the terms, with no reference to any external « sense » (like https://towardsdatascience.com/word-embedding-with-word2vec-and-fasttext-a2…), and that can discriminate between several usages of the same word according to the context.
Wikidata lexicographical data and Wikibase items could close the loop between the two methods and allow us to semi-automatically build tools that annotate texts with Wikidata items if there is something relevant in Wikidata, and if there is not, suggest adding the data to Wikidata, whether it's a missing item or a missing sense for the term.
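The sense-discrimination half of that loop can be sketched as nearest-neighbour search over sense embeddings: given a vector for the current context, pick the stored sense whose vector is closest by cosine similarity. Everything below (the sense ids and the two-dimensional vectors) is invented for illustration; a real pipeline would use trained embeddings keyed by Wikidata sense ids:

```python
def disambiguate(context_vec, sense_vectors):
    """Return the sense id whose embedding is closest (by cosine
    similarity) to the given context vector.

    sense_vectors: dict mapping a sense id (e.g. a Wikidata sense id)
    to its embedding vector.
    """
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = sum(a * a for a in u) ** 0.5
        nv = sum(b * b for b in v) ** 0.5
        return dot / (nu * nv)

    return max(sense_vectors, key=lambda s: cos(context_vec, sense_vectors[s]))

# Toy example: two hypothetical senses of "bank" with made-up embeddings.
senses = {"bank-river": [0.9, 0.1], "bank-finance": [0.1, 0.9]}
context = [0.8, 0.3]  # e.g. averaged vectors of the surrounding words
assert disambiguate(context, senses) == "bank-river"
```

A text annotator could then link the chosen sense id back to the corresponding Wikidata entity, and offer to create a new sense when no stored candidate scores well.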
It may even be possible to store word embeddings generated by word2vec methods in Wikidata senses.
In conclusion, I think Wikidata senses will be used because they close a gap. This does not depend only on strong involvement from a traditional volunteer lexicographic community. If researchers in the language community dive into this and develop algorithms and easy-to-use tools to share their lexicographical data on Wikidata, there could be a very positive feedback loop: a lot of data ends up being added to Wikidata, the stored data help the algorithms to enrich text annotations, for example, and missing data are semi-automatically added thanks to user feedback.
This is all just wishful thinking, but I thought it deserved to be shared; hopefully it will at least launch a thread of ideas/comments here :)
Thomas
Hi,
I am trying to bring some light into this query:
https://w.wiki/8Xv
Many of the listings have no labels in any language. Is there a simple way to get the Wikipedia title, in whichever Wikipedia, from the Q-number?
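One way to do this (a sketch, assuming the standard WDQS sitelink model; wd:Q6986 stands in for any Q-number from the result set) is to follow the schema:about sitelinks:

```sparql
SELECT ?item ?sitelink ?title WHERE {
  VALUES ?item { wd:Q6986 }            # substitute the Q-numbers in question
  ?sitelink schema:about ?item ;
            schema:isPartOf ?site ;
            schema:name ?title .
  ?site wikibase:wikiGroup "wikipedia" .   # restrict to Wikipedias
}
```

Dropping the wikibase:wikiGroup triple returns sitelinks to all sister projects, not just Wikipedias.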
cheers
Olaf
Dr. Olaf Simons
Forschungszentrum Gotha der Universität Erfurt
Schloss Friedenstein, Pagenhaus
99867 Gotha
Büro: +49-361-737-1722
Mobil: +49-179-5196880
Privat: Hauptmarkt 17b/ 99867 Gotha
Good afternoon,
I'm Raffaele, AKA Japponcino.
Since I'm an artist, I created my Wikidata
page with the help of a friend.
My Wikidata page is this one: https://www.wikidata.org/wiki/Q64154408
My friend suggested that I add identifiers such as social media accounts, so I
went to the page listed above and tried to add identifiers, but I only found
"add statement" at the end of the page, while my friend has identifiers and
can add them. His page is this one: https://www.wikidata.org/wiki/Q61758300
How can this issue be solved?
Best regards,
Raffaele
Hello all!
I'm Guillaume, you might have read about me already. At the moment,
I'm the primary contact for things related to Wikidata Query Service.
Feel free to ping me on Phabricator, or via direct email if you think
I can help unblock something.
We are in the process of hiring two new engineers to work on WDQS; I'll
keep you posted when that happens!
Thanks all!
Guillaume
--
Guillaume Lederrey
Engineering Manager, Search Platform
Wikimedia Foundation
UTC+2 / CEST
I am trying to synchronise FactGrid data with Wikidata.
A strange question: If I run a SPARQL search such as this one:
https://query.wikidata.org/#SELECT%20%3FGemeinde_in_Deutschland%20%3FGemein…
I get the coordinates with what looks like swapped values:
"Point(10.7183 50.9489)" on the QueryService table output
"50°56'56"N, 10°43'6"E" on P625 at the respective Wikidata Item Q6986
any idea why this is so?
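The values are not actually swapped: WKT literals, which the Query Service returns, list longitude first ("Point(long lat)"), while the Wikidata UI displays latitude first. So "Point(10.7183 50.9489)" and "50°56'56"N, 10°43'6"E" describe the same location. A small sketch of converting one to the other (the function name is mine):

```python
import re

def wkt_point_to_lat_lon(wkt):
    """Parse a WKT 'Point(long lat)' literal and return (lat, lon).

    WKT, as returned by the Wikidata Query Service, lists longitude
    first; the Wikidata UI displays latitude first, which makes the
    values look swapped.
    """
    m = re.match(r"Point\(([-\d.]+)\s+([-\d.]+)\)", wkt)
    lon, lat = float(m.group(1)), float(m.group(2))
    return lat, lon

# The value from the query above:
lat, lon = wkt_point_to_lat_lon("Point(10.7183 50.9489)")
# lat = 50.9489 (~ 50°56'56"N), lon = 10.7183 (~ 10°43'6"E),
# matching P625 on Q6986.
```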
Best,
Olaf
Dr. Olaf Simons
Forschungszentrum Gotha der Universität Erfurt
Schloss Friedenstein, Pagenhaus
99867 Gotha
Büro: +49-361-737-1722
Mobil: +49-179-5196880
Privat: Hauptmarkt 17b/ 99867 Gotha
Sorry for cross-posting!
Reminder: Technical Advice IRC meeting this week **Wednesday 3-4 pm UTC**
on #wikimedia-tech.
Questions can be asked in English, Persian, and Spanish!
The Technical Advice IRC Meeting (TAIM) is a weekly support event for
volunteer developers. Every Wednesday, two full-time developers are
available to help you with all your questions about MediaWiki, gadgets,
tools and more! This can be anything from "how to get started" and "who
would be the best contact for X" to specific questions about your project.
If you know already what you would like to discuss or ask, please add your
topic to the next meeting:
https://www.mediawiki.org/wiki/Technical_Advice_IRC_Meeting
Hope to see you there!
--
Raz Shuty
Engineering Manager
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Phone: +49 (0)30 219 158 26-0
https://wikimedia.de
Imagine a world in which every single human being can freely share in the
sum of all knowledge. That's our commitment.
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/029/42207.