Greetings,
(This message is also posted at the Commons Village Pump, and the SDC talk
page)
The Structured Data on Commons [0] team plans to release support for
depicts statements this week, on Thursday, 18 April. The community's
testing over the past several weeks [1] helped identify and fix issues
before launch, and the development team spent time setting up extensive
internal testing to make sure the release goes as well as possible.
This release is very simple, with only the most basic depicts statements
available. There is a significant amount of technological change happening
with this project, and this release contains a lot of background change
that the team needs to make sure works fine live on Commons before adding
further support. More parts to depicts statements, and other statements,
will be released within the next few weeks.
A page for depicts has been set up at Commons:Depicts [2]. As I can't
actually write instructive Commons policy or guidelines, I encourage those
who have tried out simple depicts tagging to add a few lines to the page
suggesting proper use of the tool. I also encourage usage to be
conservative at first, as more advanced features will arrive within the
coming month or two as additional statement support goes live.
I'll keep the community updated as plans progress throughout the week; the
team will know better within the next day or two whether things are
definitely okay to proceed with the release.
0. https://commons.wikimedia.org/wiki/Commons:Structured_data
1.
https://commons.wikimedia.org/wiki/Commons:Structured_data/Get_involved/Fee…
2. https://commons.wikimedia.org/wiki/Commons:Depicts
--
Keegan Peterzell
Community Relations Specialist
Wikimedia Foundation
Dear Mr.,
I thank you for your answer. This is just what I meant. If Wikidata supports Arabic dialects, there will be no need to create tens of Wikipedias and Wiktionaries.
Yours Sincerely,
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
Undergraduate Researcher, UR12SP36
GLAM and Education Coordinator, Wikimedia TN User Group
Member, WikiResearch Tunisia
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
Founder, TunSci
____________________
+21629499418
-------- Original message --------
From: Gerard Meijssen <gerard.meijssen(a)gmail.com>
Date: 2019/04/21 16:18 (GMT+01:00)
To: Discussion list for the Wikidata project <wikidata(a)lists.wikimedia.org>
Subject: Re: [Wikidata] Wikidata and Arabic dialects
Hoi,
All Arabic languages are in principle eligible for localisation at translatewiki.net. At the time, the WMF board was not too pleased with an Egyptian Wikipedia. Having said that, localisation is not a problem from the language committee's position. Having new projects in Arabic-family languages requires the blessing of the board.
Thanks,
GerardM
On Sun, 21 Apr 2019 at 16:05, Houcemeddine A. Turki <turkiabdelwaheb(a)hotmail.fr> wrote:
Dear all,
I thank you for your efforts. It has been a long time since I began working on the issue of Arabic dialects in wikis. This issue is complicated, as it has many significant sides, such as the linguistic side and the sociopolitical side. In 2008, the Language Committee approved the creation of the Egyptian Wikipedia. Nowadays, many of the articles of this Wikipedia are written in Modern Standard Arabic and not in Egyptian Arabic. Moreover, most of the articles written in the Egyptian Wikipedia are stubs. This can be explained by the difficulty of writing Egyptian and of finding technical terms in this language, as in many other Arabic dialects, due to the lack of standardization. In 2017, I proposed at the AICCSA 2017 conference (ERA C Class Conference) that Wikidata can be used as a knowledge base for Arabic dialects (https://www.researchgate.net/publication/321039195_Using_WikiData_as_a_Mult…). In fact, this can be done simply by adding labels in Arabic dialects to all Wikidata entities, and we will not face the problems faced by active Egyptian Wikipedia editors, as we do not need to formulate statements into sentences to put them in Wikidata. Furthermore, the database can be useful for a variety of other important tasks such as Named Entity Recognition. All we should do is translate MediaWiki system messages into Arabic dialects and add labels, descriptions and aliases in Arabic dialects. Since 2017, I have worked to add support for Tunisian Arabic in Wikidata so that I can convince users of the soundness of this solution. As you may know, Tunisian is represented in MediaWiki with three language codes: aeb, aeb-arab (Arabic script) and aeb-latn (Latin script: Arabizi). I added labels, descriptions and aliases in aeb-arab and aeb-latn. However, aeb remained without labels. I ask if you can add a module to Wikidata that adds any label, description or alias of aeb-arab to aeb. I also ask for your opinions about this Wikidata project.
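The aeb-arab to aeb fallback requested above can be sketched as a small pure function over the standard Wikibase entity JSON shape (labels/descriptions keyed by language code, aliases as lists). The helper name is hypothetical; no such module exists in Wikidata, and a real implementation would run server-side or as a bot.

```python
# Hypothetical sketch: fill missing plain-aeb terms from aeb-arab in a
# Wikidata entity dict. Existing aeb values are never overwritten.

def copy_script_variant(entity, src="aeb-arab", dst="aeb"):
    """Copy labels, descriptions, and aliases from src to dst when absent."""
    for field in ("labels", "descriptions"):
        terms = entity.setdefault(field, {})
        if src in terms and dst not in terms:
            terms[dst] = {"language": dst, "value": terms[src]["value"]}
    aliases = entity.setdefault("aliases", {})
    if src in aliases and dst not in aliases:
        aliases[dst] = [{"language": dst, "value": a["value"]}
                        for a in aliases[src]]
    return entity

entity = {
    "labels": {"aeb-arab": {"language": "aeb-arab", "value": "تونس"}},
    "descriptions": {},
    "aliases": {},
}
copy_script_variant(entity)
print(entity["labels"]["aeb"]["value"])  # تونس
```

A bot built on this sketch would fetch each entity's JSON, apply the copy, and write the result back, skipping entities where aeb terms already exist.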
Yours Sincerely,
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
Undergraduate Researcher, UR12SP36
GLAM and Education Coordinator, Wikimedia TN User Group
Member, WikiResearch Tunisia
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
Founder, TunSci
____________________
+21629499418
_______________________________________________
Wikidata mailing list
Wikidata(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata
Dear Mr.,
I thank you for your answer. I agree. We do not need aeb. In fact, we have an Arabic and a Latin script for Tunisian. Concerning the long discussions on this issue, the problem is that before 2015 there was no reference linguistic research paper about the Latin script. That is why we decided to create a formal Latin script and apply it in our Wikimedia projects. However, the project failed, as we could not convince people to use the new writing system. Nowadays, however, Arabizi, the main Latin script for Tunisian, is thoroughly described in the research literature. We currently have five detailed descriptions of Tunisian Arabizi, so we can use it. Consequently, we can write the native name of aeb-latn as "Tounsi".
Yours Sincerely,
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
Undergraduate Researcher, UR12SP36
GLAM and Education Coordinator, Wikimedia TN User Group
Member, WikiResearch Tunisia
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
Founder, TunSci
____________________
+21629499418
-------- Original message --------
From: Gerard Meijssen <gerard.meijssen(a)gmail.com>
Date: 2019/04/22 07:16 (GMT+01:00)
To: Discussion list for the Wikidata project <wikidata(a)lists.wikimedia.org>
Subject: Re: [Wikidata] Wikidata and Tunisian Arabic
Hoi,
Having aeb, aeb-arab and aeb-latn all available is wrong. There should be an aeb that is implicitly in the default Arabic script. When you want both the Latin and the Arabic script explicitly, you do not need the aeb.
Thanks,
GerardM
On Mon, 22 Apr 2019 at 02:54, Houcemeddine A. Turki <turkiabdelwaheb(a)hotmail.fr> wrote:
Dear all,
I thank you for your answers. I am sure that using Wikidata to support Arabic dialects, including Tunisian, is simpler. Effectively, all that should be done is to translate MediaWiki system messages and add labels, descriptions and aliases to all Wikidata items and properties. However, we are facing several problems. Now, according to https://m.wikidata.org/wiki/User:Pasleim/Language_statistics_for_items, we have added 3,711 labels, 3,324 descriptions and 217,740 aliases in Arabic script. Although we added the labels and descriptions ourselves, we cannot determine how the aliases were added. I ask if I can see and track these aliases so that I can verify whether they are accurate. We found as well that 196,119 labels, 3 descriptions and 23,581 aliases were added in Latin script (aeb-latn). After verification, we found that it was Mr. Marcus Cyron who added most of the output for the Latin script. In fact, he automatically considers the English name of a European or American person to be the label of this person in aeb-latn. If we consider Arabizi as the Latin script of Tunisian, the rule should be "The French name of a person is considered de facto as the label of this person in aeb-latn, except when the person was born in or lives in an OIC country (https://en.m.wikipedia.org/wiki/Member_states_of_the_Organisation_of_Islami…)". Mr. Marcus Cyron is kindly requested to adjust this rule, and he is thanked in advance for applying it to all Wikidata entities. He can also apply the rule to non-OIC towns. Concerning aeb, I ask if a module can be added to Wikidata so that it automatically considers all labels, descriptions and aliases of aeb-arab as those of aeb.
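The labelling rule quoted above can be sketched as a pure function. The function name and the QIDs below are illustrative assumptions; a real batch job would load the full OIC member list and read each person's country from their Wikidata statements (e.g. P27, country of citizenship).

```python
# Illustrative sketch of the proposed aeb-latn rule, not an existing bot.
# OIC_MEMBERS holds a tiny sample of member-state QIDs for demonstration.
OIC_MEMBERS = {"Q948", "Q79"}  # Tunisia, Egypt (illustrative subset)

def proposed_aeb_latn_label(labels, country_qid, oic_members=OIC_MEMBERS):
    """Reuse the French label as the aeb-latn label, except for people
    from OIC member states, whose Arabizi form must be written by hand
    (returns None in that case, or when no French label exists)."""
    if country_qid in oic_members:
        return None
    return labels.get("fr")

print(proposed_aeb_latn_label({"fr": "Victor Hugo"}, "Q142"))   # Victor Hugo
print(proposed_aeb_latn_label({"fr": "Ibn Khaldoun"}, "Q948"))  # None
```

Encoding the rule this way also makes the exception explicit: entities that return None need a manually written Arabizi label rather than a copied French one.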
Yours Sincerely,
Houcemeddine Turki (he/him)
Medical Student, Faculty of Medicine of Sfax, University of Sfax, Tunisia
Undergraduate Researcher, UR12SP36
GLAM and Education Coordinator, Wikimedia TN User Group
Member, WikiResearch Tunisia
Member, Wiki Project Med
Member, WikiIndaba Steering Committee
Member, Wikimedia and Library User Group Steering Committee
Co-Founder, WikiLingua Maghreb
Founder, TunSci
____________________
+21629499418
_______________________________________________
Wikidata mailing list
Wikidata(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata
Hi Team,
I have been part of the list for quite a long time now, but have made no
contributions. To be honest, I never really understood the content of the
emails.
I request that you or the team kindly unsubscribe me from the mailing list.
Thanks,
Kinan
------------------------------------------------------------------------------
CALL FOR CONTRIBUTIONS
THE SUBMISSION DEADLINE IS ON JUNE 28TH, 2019
------------------------------------------------------------------------------
The Fourteenth International Workshop on
ONTOLOGY MATCHING
(OM-2019)
http://om2019.ontologymatching.org/
October 26th or 27th, 2019, ISWC Workshop Program,
Auckland, New Zealand
BRIEF DESCRIPTION AND OBJECTIVES
Ontology matching is a key interoperability enabler for the Semantic Web,
as well as a useful technique in some classical data integration tasks
dealing with the semantic heterogeneity problem. It takes ontologies
as input and determines as output an alignment, that is, a set of
correspondences between the semantically related entities of those
ontologies.
These correspondences can be used for various tasks, such as ontology
merging, data interlinking, query answering or process mapping.
Thus, matching ontologies enables the knowledge and data expressed
with the matched ontologies to interoperate.
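The input/output contract described above (two ontologies in, an alignment of correspondences out) can be illustrated with a toy label-based matcher. This is only a sketch for intuition; systems evaluated in the OAEI campaigns use far richer structural and semantic features.

```python
# Toy matcher: align two ontologies represented as lists of entity labels.
# A correspondence is a (label_a, label_b, similarity) triple; the
# alignment is the set of correspondences above a similarity threshold.
from difflib import SequenceMatcher

def match(labels_a, labels_b, threshold=0.8):
    """Return an alignment between two label lists."""
    alignment = []
    for a in labels_a:
        for b in labels_b:
            s = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if s >= threshold:
                alignment.append((a, b, round(s, 2)))
    return alignment

print(match(["Author", "Paper"], ["Writer", "Article", "Paper"]))
# [('Paper', 'Paper', 1.0)]
```

Note how purely lexical matching misses the Author/Writer correspondence; closing that gap with background knowledge is exactly what the techniques in the topic list below address.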
The workshop has three goals:
1. To bring together leaders from academia, industry and user institutions
to assess how academic advances are addressing real-world requirements.
The workshop will strive to improve academic awareness of industrial
and final user needs, and therefore, direct research towards those needs.
Simultaneously, the workshop will serve to inform industry and user
representatives about existing research efforts that may meet their
requirements. The workshop will also investigate how the ontology
matching technology is going to evolve, especially with respect to
data interlinking, process mapping and web table matching tasks.
2. To conduct an extensive and rigorous evaluation of ontology matching
and instance matching (link discovery) approaches through
the OAEI (Ontology Alignment Evaluation Initiative) 2019 campaign:
http://oaei.ontologymatching.org/2019/
3. To examine new uses, similarities and differences from database
schema matching, which has received decades of attention
but is just beginning to transition to mainstream tools.
This year, in sync with the main conference, we encourage submissions
specifically devoted to: (i) datasets, benchmarks and replication studies,
services, software, methodologies, protocols and measures
(not necessarily related to OAEI), and (ii) application of
the matching technology in real-life scenarios and assessment
of its usefulness to the final users.
TOPICS of interest include but are not limited to:
Business and use cases for matching (e.g., big, open, closed data);
Requirements to matching from specific application scenarios (e.g.,
public sector, homeland security);
Application of matching techniques in real-world scenarios (e.g., with
environmental data);
Formal foundations and frameworks for matching;
Matching and knowledge graphs;
Matching and deep learning;
Matching and embeddings;
Matching and big data;
Matching and linked data;
Instance matching, data interlinking and relations between them;
Privacy-aware matching;
Process model matching;
Large-scale and efficient matching techniques;
Matcher selection, combination and tuning;
User involvement (including both technical and organizational aspects);
Explanations in matching;
Social and collaborative matching;
Uncertainty in matching;
Reasoning with alignments;
Alignment coherence and debugging;
Alignment management;
Matching for traditional applications (e.g., data science);
Matching for emerging applications (e.g., web tables, knowledge graphs).
SUBMISSIONS
Contributions to the workshop can be made as technical papers and
posters/statements of interest addressing different issues of ontology
matching, as well as by participating in the OAEI 2019 campaign. Long
technical papers should be at most 12 pages. Short technical papers should
be at most 5 pages.
Posters/statements of interest should not exceed 2 pages.
All contributions have to be prepared using the LNCS Style:
http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0
and should be submitted in PDF format (no later than June 28th, 2019)
through the workshop submission site at:
https://www.easychair.org/conferences/?conf=om2019
Contributors to the OAEI 2019 campaign have to follow the campaign
conditions
and schedule at http://oaei.ontologymatching.org/2019/.
DATES FOR TECHNICAL PAPERS AND POSTERS:
June 28th, 2019: Deadline for the submission of papers.
July 24th, 2019: Deadline for the notification of acceptance/rejection.
August 26th, 2019: Workshop camera ready copy submission.
October 26th or 27th, 2019: OM-2019, Auckland, New Zealand.
Contributions will be refereed by the Program Committee.
Accepted papers will be published in the workshop proceedings as a volume
of CEUR-WS
as well as indexed on DBLP.
ORGANIZING COMMITTEE
1. Pavel Shvaiko (main contact)
Trentino Digitale, Italy
2. Jérôme Euzenat
INRIA & Univ. Grenoble Alpes, France
3. Ernesto Jiménez-Ruiz
The Alan Turing Institute, UK & University of Oslo, Norway
4. Oktie Hassanzadeh
IBM Research, USA
5. Cássia Trojahn
IRIT, France
PROGRAM COMMITTEE:
Alsayed Algergawy, Jena University, Germany
Manuel Atencia, INRIA & Univ. Grenoble Alpes, France
Zohra Bellahsene, LIRMM, France
Jiaoyan Chen, University of Oxford, UK
Valerie Cross, Miami University, USA
Jérôme David, University Grenoble Alpes & INRIA, France
Gayo Diallo, University of Bordeaux, France
Warith Eddine Djeddi, LIPAH & LABGED, Tunisia
AnHai Doan, University of Wisconsin, USA
Alfio Ferrara, University of Milan, Italy
Marko Gulic, University of Rijeka, Croatia
Wei Hu, Nanjing University, China
Ryutaro Ichise, National Institute of Informatics, Japan
Antoine Isaac, Vrije Universiteit Amsterdam & Europeana, Netherlands
Simon Kocbek, University of Melbourne, Australia
Prodromos Kolyvakis, EPFL, Switzerland
Patrick Lambrix, Linköpings Universitet, Sweden
Oliver Lehmberg, University of Mannheim, Germany
Vincenzo Maltese, University of Trento, Italy
Fiona McNeill, University of Edinburgh, UK
Christian Meilicke, University of Mannheim, Germany
Peter Mork, MITRE, USA
Andriy Nikolov, Metaphacts GmbH, Germany
Axel Ngonga, University of Paderborn, Germany
George Papadakis, University of Athens, Greece
Catia Pesquita, University of Lisbon, Portugal
Henry Rosales-Méndez, University of Chile, Chile
Juan Sequeda, Capsenta, USA
Kavitha Srinivas, IBM, USA
Giorgos Stoilos, National Technical University of Athens, Greece
Pedro Szekely, University of Southern California, USA
Valentina Tamma, University of Liverpool, UK
Ludger van Elst, DFKI, Germany
Xingsi Xue, Fujian University of Technology, China
Ondrej Zamazal, Prague University of Economics, Czech Republic
Songmao Zhang, Chinese Academy of Sciences, China
-------------------------------------------------------
More about ontology matching:
http://www.ontologymatching.org/
http://book.ontologymatching.org/
-------------------------------------------------------
Best Regards,
Pavel
-------------------------------------------------------
Pavel Shvaiko, PhD
Trentino Digitale, Italy
http://www.ontologymatching.org/
https://www.trentinodigitale.it/
http://www.dit.unitn.it/~pavel
Hello all,
As you may know, the *WikidataCon 2019
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2019>* will take place
on October 25th-26th 2019 in Berlin. The conference will host 250 people
from the Wikidata community, but also the emerging Wikibase community, as
well as partners, organizations and companies who may be
interested in *using Wikidata and Wikibase*, or contributing in various
ways to the evolution of Wikidata. The event will be focused on networking
and strategic discussions around Wikidata and Wikibase, with a special
focus on *languages and Wikidata*.
Because the number of seats is limited and in order to make sure that the
access to the conference is fair, we decided to set up a selection process.
People who wish to attend the conference can now apply by filling in the
application form
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2019/Attend/Apply>. A
committee equally composed of volunteers and staff will evaluate the
applications against criteria that are already published onwiki
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2019/Attend>.
This application form aims to gather all of the useful information at once,
that’s why it also contains the program submissions (limited to 3 per
person) and the scholarship application if needed. You can find more
information about the content of the form on this page
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2019/Attend/Apply>.
Part of the attendees will be also invited directly by the organizing team,
based on the connections we want to build with organizations during this
conference, and the input they can bring to Wikidata’s strategy.
The deadline to fill out the application form is April 26th. No application
can be considered after this date. Over the following month, the committee
will evaluate the applications, and applicants will receive an answer by
June 12th at the latest.
Here are a few tips to help you prepare your application:
- Read the description of the conference
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2019> and the page
describing the process and the criteria
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2019/Attend>
carefully.
- Plan 20 to 30 minutes to fill out the form.
- Don’t do it at the last minute :) it will be less stressful for you,
as well as for the committee.
- Prepare all your program submissions, as they will be requested in the
same form (no submission can be done later). On this page
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2019/Program> you
can check the formats and questions.
- Read the Friendly Space Policy
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2019/Attend/Policy>
that will apply to the entire event.
- Look at the visa information
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2019/Attend/Visa> to
check if you need a visa to Germany and what actions you need to take.
You can find a lot of information regarding the conference on
Wikidata:WikidataCon
2019 <https://www.wikidata.org/wiki/Wikidata:WikidataCon_2019>. If you have
any questions or issues, feel free to share them on the talk page onwiki or to
reach the organizing team at info(a)wikidatacon.org
Finally, feel free to share this message with your local community,
networks, or projects you’re working in! We want to make sure that everyone
who’s involved in Wikidata or Wikibase has a chance to apply.
We're looking forward to receiving your applications!
Thanks for your attention,
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/029/42207.