[To all Wikimedia projects, minus mailing lists with an active
discussion already.]
In August 2020, the Wikimedia Foundation Board of Trustees may decide on a
rename to "Wikipedia Foundation", among various other things.
<https://meta.wikimedia.org/wiki/Wikimedia_Foundation_Board_noticeboard/Boar…>
Following a community meeting, a proposed open letter was written:
<https://meta.wikimedia.org/wiki/Community_open_letter_on_renaming>
«We ask the Wikimedia Foundation to pause or stop its current movement
renaming activities, due to persistent shortcomings in the current
rebranding process. Future work should be restarted only in a way that
ensures equitable decision-making.»
(Sorry for the crossposting. When replying, be mindful of cc. Do
consider forwarding to language-specific discussion venues with a short
translated introduction, or translate the pages on Meta.)
Cheers,
Federico aka Nemo
We have previously released lexical masks as ShEx files: schemata for
lexicographic forms that can be used to validate whether the data is
complete.
We found it quite challenging to turn these ShEx files into forms
for entering the data, such as Lucas Werkmeister’s Lexeme Forms. So we
adapted our approach slightly: we now publish JSON files that keep the
structures in an easier-to-parse and easier-to-understand format, along with
a script that translates these JSON files into ShEx Entity Schemas.
Furthermore, we have published more masks, for more languages and parts of
speech than before.
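As an illustration of the kind of completeness check such a mask enables, here is a minimal sketch. The JSON structure below is hypothetical and simplified, not the actual published mask format; the full documentation linked next describes the real one.

```python
import json

# Hypothetical mask for an English noun: the real published format may
# differ; this structure is for illustration only.
mask = json.loads("""
{
  "language": "en",
  "lexicalCategory": "noun",
  "forms": [
    {"grammaticalFeatures": ["singular"]},
    {"grammaticalFeatures": ["plural"]}
  ]
}
""")

def missing_forms(mask, lexeme_forms):
    """Return the feature sets required by the mask that the lexeme lacks.

    lexeme_forms is a list of grammatical-feature lists, one per existing
    form of the lexeme being validated.
    """
    have = {frozenset(features) for features in lexeme_forms}
    return [f["grammaticalFeatures"] for f in mask["forms"]
            if frozenset(f["grammaticalFeatures"]) not in have]

# A lexeme with only a singular form is incomplete under this mask:
print(missing_forms(mask, [["singular"]]))  # → [['plural']]
```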
Full documentation can be found on wiki:
https://www.wikidata.org/wiki/Wikidata:Lexical_Masks#Paper
Background can be found in the paper:
https://www.aclweb.org/anthology/2020.lrec-1.372/
Thanks Bruno, Saran, and Daniel for your great work!
Hello
(Please direct me somewhere else if this is not the place to ask.)
I am developing a bot to sync ISSN data [1] using WDTK. Creating a
label/alias with the language code 'mul' (to indicate a multilingual
value) or 'mis' (to indicate that the language is unknown) does not work and
returns an exception from the MediaWiki API ("[not-recognized-language] The
supplied language code was not recognized"). These two codes are, however,
documented at https://www.wikidata.org/wiki/Help:Monolingual_text_languages
and listed at
https://www.wikidata.org/wiki/Help:Wikimedia_language_codes/lists/all
I also cannot use these two codes when editing through the web interface.
I do, however, find items in Wikidata with 'mul' language codes.
Am I doing something wrong, did I misunderstand something, or is there a
genuine problem with these codes?
I have also opened a ticket in the WDTK issue tracker:
https://github.com/Wikidata/Wikidata-Toolkit/issues/509
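One way to probe the discrepancy may be the API's wbcontentlanguages module, which lists the codes that are valid in each context; codes like 'mul' and 'mis' might be accepted for monolingual-text values but not for terms (labels/aliases), which would explain the error. A minimal sketch follows; the request parameters name a real API module, but the sample language sets are hypothetical stand-ins, not fetched from the live API.

```python
# Parameters for querying which codes are valid for terms
# (labels, descriptions, aliases):
params = {
    "action": "query",
    "meta": "wbcontentlanguages",
    "wbclcontext": "term",
    "format": "json",
}

# Hypothetical results, for illustration only:
term_languages = {"en", "fr", "de"}
monolingualtext_languages = {"en", "fr", "de", "mul", "mis"}

def usable(code, valid_codes):
    """Whether a language code is accepted in a given context."""
    return code in valid_codes

print(usable("mul", term_languages))             # → False
print(usable("mul", monolingualtext_languages))  # → True
```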
Thanks
Thomas
[1] ISSN Bot:
https://www.wikidata.org/wiki/Wikidata_talk:WikiProject_Periodicals#Data_do…
--
*Thomas Francart* -* SPARNA*
Web of *data* | *Information* architecture | Access to
*knowledge*
blog : blog.sparna.fr, site : sparna.fr, linkedin :
fr.linkedin.com/in/thomasfrancart
tel : +33 (0)6.71.11.25.97, skype : francartthomas
Hi all,
join the teams from Analytics and Research for their monthly office hours
next Wednesday, 2020-06-24, from 9:00 to 10:00 am (UTC)*. Bring all your
research/analytics questions and ideas to discuss projects, data, analysis,
etc. To participate, please join the IRC channel: #wikimedia-research [1].
More detailed information can be found here [2].
Note the earlier starting time compared to previous meetings -- starting
this month we are experimenting with alternating time slots from month to
month to provide different options for participation and accommodate a
wider range of timezones.
Looking forward to your participation,
Martin
[1] irc://chat.freenode.net:6667/wikimedia-research
[2] https://www.mediawiki.org/wiki/Wikimedia_Research/Office_hours
* find local times here:
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200624T09
--
Martin Gerlach
Research Scientist
Wikimedia Foundation
------------------------------------------------------------------------------
CALL FOR CONTRIBUTIONS
THE SUBMISSION DEADLINE IS ON AUGUST 10TH, 2020
------------------------------------------------------------------------------
The Fifteenth International Workshop on
ONTOLOGY MATCHING
(OM-2020)
http://om2020.ontologymatching.org/
November 2nd or 3rd, 2020,
International Semantic Web Conference (ISWC) Workshop Program,
VIRTUAL CONFERENCE
BRIEF DESCRIPTION AND OBJECTIVES
Ontology matching is a key interoperability enabler for the Semantic Web,
as well as a useful technique in some classical data integration tasks
dealing with the semantic heterogeneity problem. It takes ontologies
as input and determines as output an alignment, that is, a set of
correspondences between the semantically related entities of those
ontologies.
These correspondences can be used for various tasks, such as ontology
merging, data interlinking, query answering or navigation over knowledge
graphs.
Thus, matching ontologies enables the knowledge and data expressed
with the matched ontologies to interoperate.
The workshop has three goals:
1. To bring together leaders from academia, industry and user institutions
to assess how academic advances are addressing real-world requirements.
The workshop will strive to improve academic awareness of industrial
and end-user needs, and thereby direct research towards those needs.
Simultaneously, the workshop will serve to inform industry and user
representatives about existing research efforts that may meet their
requirements. The workshop will also investigate how ontology
matching technology is going to evolve, especially with respect to
data interlinking, knowledge graph and web table matching tasks.
2. To conduct an extensive and rigorous evaluation of ontology matching
and instance matching (link discovery) approaches through
the OAEI (Ontology Alignment Evaluation Initiative) 2020 campaign:
http://oaei.ontologymatching.org/2020/
3. To examine similarities and differences with other techniques and
usages, old, new and emerging, such as web table matching
or knowledge embeddings.
This year, in sync with the main conference, we encourage submissions
specifically devoted to: (i) datasets, benchmarks and replication studies,
services, software, methodologies, protocols and measures
(not necessarily related to OAEI), and (ii) application of
the matching technology in real-life scenarios and assessment
of its usefulness to the final users.
TOPICS of interest include but are not limited to:
Business and use cases for matching (e.g., big, open, closed data);
Requirements to matching from specific application scenarios (e.g.,
public sector, homeland security);
Application of matching techniques in real-world scenarios (e.g., in
cloud, with mobile apps);
Formal foundations and frameworks for matching;
Novel matching methods, including link prediction, ontology-based
access;
Matching and knowledge graphs;
Matching and deep learning;
Matching and embeddings;
Matching and big data;
Matching and linked data;
Instance matching, data interlinking and relations between them;
Privacy-aware matching;
Process model matching;
Large-scale and efficient matching techniques;
Matcher selection, combination and tuning;
User involvement (including both technical and organizational aspects);
Explanations in matching;
Social and collaborative matching;
Uncertainty in matching;
Expressive alignments;
Reasoning with alignments;
Alignment coherence and debugging;
Alignment management;
Matching for traditional applications (e.g., data science);
Matching for emerging applications (e.g., web tables, knowledge graphs).
SUBMISSIONS
Contributions to the workshop can be made as technical papers and
posters/statements of interest addressing different issues of ontology
matching, as well as by participating in the OAEI 2020 campaign. Long
technical papers should be at most 12 pages, short technical papers at
most 5 pages, and posters/statements of interest should not exceed 2 pages.
All contributions have to be prepared using the LNCS Style:
http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0
and should be submitted in PDF format (no later than August 10th, 2020)
through the workshop submission site at:
https://www.easychair.org/conferences/?conf=om2020
Contributors to the OAEI 2020 campaign have to follow the campaign
conditions
and schedule at http://oaei.ontologymatching.org/2020/.
DATES FOR TECHNICAL PAPERS AND POSTERS:
August 10th, 2020: Deadline for the submission of papers.
September 11th, 2020: Deadline for the notification of
acceptance/rejection.
September 21st, 2020: Workshop camera ready copy submission.
November 2nd or 3rd, 2020: OM-2020, Virtual Conference.
Contributions will be refereed by the Program Committee.
Accepted papers will be published in the workshop proceedings
as a volume of CEUR-WS as well as indexed on DBLP.
ORGANIZING COMMITTEE
1. Pavel Shvaiko (main contact)
Trentino Digitale, Italy
2. Jérôme Euzenat
INRIA & Univ. Grenoble Alpes, France
3. Ernesto Jiménez-Ruiz
City, University of London, UK & SIRIUS, University of Oslo, Norway
4. Oktie Hassanzadeh
IBM Research, USA
5. Cássia Trojahn
IRIT, France
PROGRAM COMMITTEE (to be completed):
Alsayed Algergawy, Jena University, Germany
Manuel Atencia, INRIA & Univ. Grenoble Alpes, France
Zohra Bellahsene, LIRMM, France
Jiaoyan Chen, University of Oxford, UK
Valerie Cross, Miami University, USA
Jérôme David, University Grenoble Alpes & INRIA, France
Gayo Diallo, University of Bordeaux, France
Daniel Faria, Instituto Gulbenkian de Ciência, Portugal
Alfio Ferrara, University of Milan, Italy
Marko Gulic, University of Rijeka, Croatia
Wei Hu, Nanjing University, China
Ryutaro Ichise, National Institute of Informatics, Japan
Antoine Isaac, Vrije Universiteit Amsterdam & Europeana, Netherlands
Naouel Karam, Fraunhofer, Germany
Prodromos Kolyvakis, EPFL, Switzerland
Patrick Lambrix, Linköpings Universitet, Sweden
Oliver Lehmberg, University of Mannheim, Germany
Majeed Mohammadi, TU Delft, Netherlands
Peter Mork, MITRE, USA
Andriy Nikolov, Metaphacts GmbH, Germany
George Papadakis, University of Athens, Greece
Catia Pesquita, University of Lisbon, Portugal
Henry Rosales-Méndez, University of Chile, Chile
Kavitha Srinivas, IBM, USA
Giorgos Stoilos, Huawei Technologies, Greece
Pedro Szekely, University of Southern California, USA
Ludger van Elst, DFKI, Germany
Xingsi Xue, Fujian University of Technology, China
Ondrej Zamazal, Prague University of Economics, Czech Republic
Songmao Zhang, Chinese Academy of Sciences, China
-------------------------------------------------------
More about ontology matching:
http://www.ontologymatching.org/
http://book.ontologymatching.org/
-------------------------------------------------------
Best Regards,
Pavel
-------------------------------------------------------
Pavel Shvaiko, PhD
Trentino Digitale, Italy
http://www.ontologymatching.org/
https://www.trentinodigitale.it/
http://www.dit.unitn.it/~pavel
Hi All!
I'm trying to figure out possible ways to set up the MediaWiki-Wikibase
software to allow collaborative creation of wiki pages and a
corresponding knowledge graph.
As far as I understand, it is possible to configure a single installation
of MediaWiki with the Wikibase extension, and have all pages in the Main
namespace, like https://example.org/wiki/, and all graph items in the
namespace https://example.org/wiki/Item:
I want to see something more similar to the Wikipedia-Wikidata setup -- wiki pages
in the namespace https://wiki.example.org/wiki/ and wikibase graph in the
namespace https://graph.example.org/wiki/ . Am I right that I have to
launch two instances of MediaWiki for that, one without Wikibase extension
and one with it?
Or is there a simpler way to configure the system to get such namespace
structure?
Thank you for your help!
Victor Agroskin
As the last release of Python 2 is finally out, the July release of
Pywikibot is going to be the **last release that supports Python 2**.
Support for Python 3.4 and for MediaWiki versions older than 1.19 is also
going to be dropped. After this release, Pywikibot will not receive any
further patches or bug fixes related to those Python and MediaWiki versions.
Functions and other code specific to Python 3.4, Python 2.x or MediaWiki
older than 1.19 will be removed.
For your convenience, this release is marked with a "python2"
git tag, and it is also the last 3.0.x release. In case you really need it,
the Pywikibot team has created the /shared/pywikibot/core_python2 repository
on Toolforge and a python2-pywikibot package in the software repositories of
some operating systems.
The Pywikibot team strongly recommends that you migrate your scripts from
Python 2 to Python 3. The migration steps were described in a previous
message, which can be found here:
https://lists.wikimedia.org/pipermail/pywikibot/2020-January/009976.html
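For reference, a typical change of the kind such a migration involves looks like the following. This is a generic Python 2-to-3 example, not taken from the linked guide or from any Pywikibot script:

```python
# Python 2 version (will stop working once Python 2 support is dropped):
#   print u"Editing %s" % page_title
#   text = unicode(raw_bytes, "utf-8")

# Python 3 version: str is Unicode by default, so the u"" prefix and
# unicode() are gone; bytes must be decoded explicitly.
def describe_edit(page_title: str) -> str:
    return "Editing {}".format(page_title)

raw_bytes = b"Douglas Adams"
text = raw_bytes.decode("utf-8")
print(describe_edit(text))  # → Editing Douglas Adams
```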
A detailed plan of the Python 2 deprecation, with dates, is described here:
https://www.mediawiki.org/wiki/Manual:Pywikibot/Compatibility
If you encounter any problems with the migration, you can always ask us
here: https://phabricator.wikimedia.org/T242120
Best regards,
Pywikibot team
*Apologies for cross-posting*
Hello all,
The Wikidata development team is currently doing some research to better
understand how people access and reuse Wikidata’s data from the code of
their applications and tools (for example through APIs), and how we can
improve our tools to make your workflows easier.
We are running a short survey to gather more information from people who
build tools based on Wikidata’s data. If you would like to participate,
please use this link
<https://docs.google.com/forms/d/e/1FAIpQLSfJ-I_Ib2EOuRVG4XfeUazhXTvgKsjcKhA…>
(Google Forms, estimated fill-in time 5min). If you don’t want to use
Google Forms, you can also send me a private email with your answers. We
would love to get as many answers as possible before June 9th.
The data will be anonymously collected and will only be shared in an
aggregated form.
If you have any questions, feel free to reach out to me directly.
Cheers,
--
Mohammed Sadat Abdulai
*Community Communications Manager for Wikidata/Wikibase*
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de