Hi Community!
We often act and think as though things are, well ... forever (even our own
lives!). But time waits for no one. So...
I had posted this over in the LD4 Slack channel but thought that this would
be good for folks here to at least always be aware of and think about in
our growing Linked Data world.
----
All Linked Data efforts need stable identifiers (on both ends of a
"link"); i.e., linking is only good if the other side will remain retrievable
and available (online, or offline through web archives and files) throughout
the expected lifetime of an effort and beyond. Think closely about "knowledge
retention" (libraries and books hold knowledge for hundreds of years!) and
about a Linked Data lifecycle that would ideally do the same for your
projects. Then look not towards the tools, but instead towards the
communities that are well established and likely to continue providing
stable identifiers that remain retrievable well into the future, and another
100+ years beyond. These might include government efforts, or communities
backed by foundations that are well grounded through philanthropic means in
perpetuity, so as to avoid link rot or non-retrievability through a complete
loss of the knowledge or of the stable identifiers in the future.
I'm hopeful that communities will think about data retention policies and
generally "Linked Data Availability" much more deeply and seriously. This
could be likened to something like GitHub's Arctic Vault, Internet Archive,
or decentralized storage solutions like Filecoin, to be able to backup and
retain the knowledge for thousands of years, if need be.
---
Thad
https://www.linkedin.com/in/thadguidry/
https://calendly.com/thadguidry/
Please find the second call for papers for the Wikidata workshop below. I
am looking forward to reading your work related to Wikidata and seeing some
of you there!
The Second Wikidata Workshop
Co-located with the 20th International Semantic Web Conference (ISWC
2021).
Date: October 24 or 25, 2021
The workshop will be held online, afternoon European time.
Website: https://wikidataworkshop.github.io/2021/
== Important dates ==
Papers due: Friday, July 30, 2021
Notification of accepted papers: Friday, September 24, 2021
Camera-ready papers due: Monday, October 4, 2021
Workshop date: October 24/25, 2021
== Overview ==
Wikidata is an openly available knowledge base, hosted by the Wikimedia
Foundation. It can be accessed and edited by both humans and machines and
acts as a common structured-data repository for several Wikimedia projects,
including Wikipedia, Wiktionary, and Wikisource. It is used in a variety of
applications by researchers and practitioners alike.
In recent years, we have seen an increase in the number of publications
around Wikidata. While there are several dedicated venues for the broader
Wikidata community to meet, none of them focuses on publishing original,
peer-reviewed research. This workshop fills this gap - we hope to provide a
forum to build this fledgling scientific community and promote novel work
and resources that support it.
The workshop seeks original contributions that address the opportunities
and challenges of creating, contributing to, and using a global,
collaborative, open-domain, multilingual knowledge graph such as Wikidata.
We encourage a range of submissions, including novel research, opinion
pieces, and descriptions of systems and resources, which are naturally
linked to Wikidata and its ecosystem or enabled by it. What we’re less
interested in are works that use Wikidata alongside or in lieu of other
resources to carry out some computational task - unless the work feeds back
into the Wikidata ecosystem, for instance by improving or commenting on
some Wikidata aspect, or suggesting new design features, tools, and
practices.
We also encourage submissions on the topic of Abstract Wikipedia,
particularly around collaborative code management, natural language
generation by a community, the abstract representation of knowledge, and
the interaction between Abstract Wikipedia and Wikidata on the one hand,
and between Abstract Wikipedia and the language Wikipedias on the other.
We welcome interdisciplinary work, as well as interesting applications that
shed light on the benefits of Wikidata and discuss areas of improvement.
The workshop is planned as an interactive half-day event, in which most of
the time will be dedicated to discussions and exchange rather than oral
presentations. For this reason, all accepted papers will be presented in
short talks and accompanied by a poster. All works will be presented
online.
== Topics ==
Topics of submissions include, but are not limited to:
- Data quality and vandalism detection in Wikidata
- Referencing in Wikidata
- Anomaly, bias, or novelty detection in Wikidata
- Algorithms for aligning Wikidata with other knowledge graphs
- The Semantic Web and Wikidata
- Community interaction in Wikidata
- Multilingual aspects in Wikidata
- Machine learning approaches to improve data quality in Wikidata
- Tools, bots, and datasets for improving or evaluating Wikidata
- Participation, diversity, and inclusivity aspects in the Wikidata
ecosystem
- Human-bot interaction
- Managing knowledge evolution in Wikidata
- Abstract Wikipedia
== Submission guidelines ==
We welcome the following types of contributions.
- Full research paper: Novel research contributions (7-12 pages)
- Short research paper: Novel research contributions of smaller scope than
full papers (3-6 pages)
- Position paper: Well-argued ideas and opinion pieces, not yet in the
scope of a research contribution (6-8 pages)
- Resource paper: New dataset or other resources directly relevant to
Wikidata, including the publication of that resource (8-12 pages)
- Demo paper: New system critically enabled by Wikidata (6-8 pages)
Submissions must be in PDF or HTML, formatted in the style of the Springer
Publications format for Lecture Notes in Computer Science (LNCS). For
details on the LNCS style, see Springer’s Author Instructions.
The papers will be peer-reviewed by at least three researchers. Accepted
papers will be published as open access papers on CEUR (we will only
publish to CEUR if the authors agree to have their papers published).
Papers have to be submitted through EasyChair:
https://easychair.org/conferences/?conf=wikidata21
== Proceedings ==
The complete set of papers will be published with the CEUR Workshop
Proceedings (CEUR-WS.org).
== Organizing committee ==
Lucie-Aimée Kaffee, University of Southampton, lucie.kaffee[[(a)]]gmail.com
Simon Razniewski, Max Planck Institute for Informatics, srazniew[[@]]
mpi-inf.mpg.de
Aidan Hogan, University of Chile, ahogan[[(a)]]dcc.uchile.cl
== Programme committee ==
Miriam Redi, Wikimedia Foundation
John Samuel, CPE Lyon
Dennis Diefenbach, Université Jean Monnet
Lydia Pintscher, Wikimedia Deutschland
Edgar Meij, Bloomberg L.P.
Thomas Pellissier Tanon, Lexistems
Hiba Arnaout, MPI for Informatics
Fabian Suchanek, Télécom ParisTech
Filip Ilievski, ISI
Marco Ponza, Bloomberg L.P.
Heiko Paulheim, University of Mannheim
Cristina Sarasua, University of Zurich
Pavlos Vougiouklis, Huawei Technologies, Edinburgh
Finn Årup Nielsen, Technical University of Denmark
Andrew D. Gordon, Microsoft Research & University of Edinburgh
--
Lucie-Aimée Kaffee
20th International Semantic Web Conference (ISWC 2021)
Virtual, October 24-28, 2021
https://iswc2021.semanticweb.org
Call for Posters, Demos, and Lightning Talks
*******************************************
The ISWC 2021 Posters and Demos Track complements the paper tracks of the
conference by offering an opportunity to present late-breaking research
results, on-going research or resource projects, and speculative or
innovative work in progress. The Lightning Talks track will open a few
weeks before ISWC 2021 takes place. We invite submissions relevant to the
area of the Semantic Web and which address, but are not limited to, the
topics of the Research Track, the Resources Track, the In-Use Track, and
the Industry Track. We also invite Visionary ideas, Position statements,
Negative results, Outrageous ideas, Novel but thoughtful speculation, and
Humorous thoughts (or similar).
Track details: https://iswc2021.semanticweb.org/posters-demos
Track chairs:
- Catia Pesquita (Universidade de Lisboa, Portugal)
- Oshani Seneviratne (Rensselaer Polytechnic Institute, NY, USA)
Contact: iswc2021-poster-demo(a)easychair.org
***** Important Dates *****
- Papers due: 5 July 2021
- Notifications: 28 July 2021
- Camera-ready paper due: 1 September 2021
*** All deadlines are AoE (Anywhere on Earth) ***
Submission link for all papers:
https://easychair.org/conferences/?conf=iswc2021
Follow ISWC on social media:
- Twitter: @iswc_conf #iswc_conf (https://twitter.com/iswc_conf)
- LinkedIn: https://www.linkedin.com/groups/13612370
- Facebook: https://www.facebook.com/ISWConf
The ISWC 2021 Organising Team
https://iswc2021.semanticweb.org/organizing-committee
Hello all,
We are happy to announce that a new tool <https://bodh.toolforge.org/>,
named Bodh, has been developed by Jayprakash <User:Jay (CIS-A2K)> as a
CIS-A2K assignment, to add or modify statements for lexemes, senses, and
forms. The tool presents lexicographical data in a tabular format generated
from SPARQL queries or from manual lists. Users can switch between working
on the lexeme proper, a sense, or a form, and add or modify statements
there.
The idea of this tool was inspired by the one and only Magnus Manske's
<https://www.wikidata.org/wiki/Q13520818> Tabernacle
<https://tabernacle.toolforge.org/#/> which helps editors to add or modify
statements, labels, descriptions and aliases for Wikidata's entity data.
The tool will be documented here
<https://www.wikidata.org/wiki/Wikidata:Bodh>. Users of the tool can
request more features or report bugs via its Phabricator project
profile <https://phabricator.wikimedia.org/project/profile/5166/>.
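As an illustration of the kind of input such a tool works from (the exact
query format Bodh expects is an assumption here), a SPARQL query over
Wikidata's lexicographical data might look like:

```sparql
# List English lexemes with their lemma and lexical category
# (illustrative; runnable against the Wikidata Query Service).
SELECT ?lexeme ?lemma ?categoryLabel WHERE {
  ?lexeme dct:language wd:Q1860 ;          # language: English
          wikibase:lemma ?lemma ;
          wikibase:lexicalCategory ?category .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 50
```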
Happy editing!
Regards,
Bodhisattwa
Wikidata Co-ordinator, CIS-A2K
Dear all,
thank you, Dragan, for the answer, yet this is nothing I can explain to "normal users". They use the Query Service with the recommended input help, and they get inexplicable January 1st dates (which, if they are not suspicious, they will accept as new knowledge).
What would speak against a query helper that returns dates pretty much as we have entered them?
Best,
Olaf
> Dragan Espenschied <dragan.espenschied(a)rhizome.org> hat am 23.06.2021 19:53 geschrieben:
>
>
> Hello Olaf,
>
> The trick is to query for the property value node instead of the
> property direct value.
>
> Example in ArtBase:
>
> https://tinyurl.com/yft9u5kg
> On Wikidata, you can replace
> "rt:" --> "wdt:"
> "rp:" --> "p:"
> "rpsv:" --> "psv:"
>
> With the "property direct" ("rt") you query for just the value of the
> property.
>
> With "property" ("rp") you query for the node that holds the value, and
> more meta-information about the value.
>
> The node contains the "node value" ("rpsv"), and that value has a
> precision value ("wikibase:timePrecision"). 9 is year level, 10 is
> month level, 11 is day level.
>
> Hope that helps! :)
>
>
>
> select ?artwork ?inception ?date_precision WHERE {
> ?artwork rt:P3 r:Q5 .
> ?artwork rt:P26 ?inception .
>
> ?artwork rp:P26 ?inception_node .
> ?inception_node rpsv:P26 ?inception_value .
> ?inception_value wikibase:timePrecision ?date_precision .
> }
> LIMIT 100
>
>
>
>
>
>
> --
> Dragan Espenschied
> Preservation Director
> Rhizome at the New Museum
>
> On Mi, Jun 23 2021 at 11:50:12 +0200, Olaf Simons
> <olaf.simons(a)pierre-marteau.com> wrote:
> > Dear All,
> >
> > I wonder whether there is a simple way to retrieve dates with the
> > precision with which they have been put into a wikibase.
> >
> > Using SPARQL I get all years such as "1749" as "1 January 1749"
> > statements, no matter whether the person is born that day or not.
> > Should I run different searches?
> >
> > Best,
> > Olaf
> >
> >
> >
> >
> >
> > Dr. Olaf Simons
> > Forschungszentrum Gotha der Universität Erfurt
> > Am Schlossberg 2
> > 99867 Gotha
> > Büro: +49-361-737-1722
> > Mobil: +49-179-5196880
> > Privat: Hauptmarkt 17b/ 99867 Gotha
> > _______________________________________________
> > Wikibaseug mailing list -- wikibaseug(a)lists.wikimedia.org
> > To unsubscribe send an email to wikibaseug-leave(a)lists.wikimedia.org
>
Dr. Olaf Simons
Forschungszentrum Gotha der Universität Erfurt
Am Schlossberg 2
99867 Gotha
Büro: +49-361-737-1722
Mobil: +49-179-5196880
Privat: Hauptmarkt 17b/ 99867 Gotha
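For Wikidata itself, Dragan's ArtBase query above translates via the prefix
substitutions he gives (rt: → wdt:, rp: → p:, rpsv: → psv:). A minimal
sketch follows; since property IDs differ between Wikibase instances,
P31/Q5 (instance of human) and P569 (date of birth) are swapped in here
purely for illustration:

```sparql
# Birth dates together with the precision they were entered with.
# wikibase:timePrecision: 9 = year, 10 = month, 11 = day.
SELECT ?person ?birth ?date_precision WHERE {
  ?person wdt:P31 wd:Q5 .              # instance of: human
  ?person p:P569 ?birth_node .         # statement node for date of birth
  ?birth_node psv:P569 ?birth_value .  # full value node
  ?birth_value wikibase:timeValue ?birth ;
               wikibase:timePrecision ?date_precision .
}
LIMIT 100
```

A row with ?date_precision = 9 can then be displayed as a bare year instead
of a spurious 1 January date.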
------------------------------------------------------------------------------
SECOND CALL FOR CONTRIBUTIONS
THE SUBMISSION DEADLINE IS ON AUGUST 9TH, 2021
------------------------------------------------------------------------------
The Sixteenth International Workshop on
ONTOLOGY MATCHING
(OM-2021)
http://om2021.ontologymatching.org/
October 25th, 2021,
International Semantic Web Conference (ISWC) Workshop Program,
VIRTUAL CONFERENCE
BRIEF DESCRIPTION AND OBJECTIVES
Ontology matching is a key interoperability enabler for the Semantic Web,
as well as a useful technique in some classical data integration tasks
dealing with the semantic heterogeneity problem. It takes ontologies
as input and determines as output an alignment, that is, a set of
correspondences between the semantically related entities of those
ontologies.
These correspondences can be used for various tasks, such as ontology
merging, data interlinking, query answering or navigation over knowledge
graphs.
Thus, matching ontologies enables the knowledge and data expressed
with the matched ontologies to interoperate.
The workshop has three goals:
1.
To bring together leaders from academia, industry and user institutions
to assess how academic advances are addressing real-world requirements.
The workshop will strive to improve academic awareness of industrial
and final user needs, and therefore, direct research towards those needs.
Simultaneously, the workshop will serve to inform industry and user
representatives about existing research efforts that may meet their
requirements. The workshop will also investigate how the ontology
matching technology is going to evolve, especially with respect to
data interlinking, knowledge graph and web table matching tasks.
2.
To conduct an extensive and rigorous evaluation of ontology matching
and instance matching (link discovery) approaches through
the OAEI (Ontology Alignment Evaluation Initiative) 2021 campaign:
http://oaei.ontologymatching.org/2021/
3.
To examine similarities and differences with other techniques and usages,
old, new, and emerging, such as web table matching or knowledge embeddings.
This year, in sync with the main conference, we encourage submissions
specifically devoted to: (i) datasets, benchmarks and replication studies,
services, software, methodologies, protocols and measures
(not necessarily related to OAEI), and (ii) application of
the matching technology in real-life scenarios and assessment
of its usefulness to the final users.
TOPICS of interest include but are not limited to:
Business and use cases for matching (e.g., big, open, closed data);
Requirements to matching from specific application scenarios (e.g.,
public sector, homeland security);
Application of matching techniques in real-world scenarios (e.g., in
cloud, with mobile apps);
Formal foundations and frameworks for matching;
Novel matching methods, including link prediction, ontology-based
access;
Matching and knowledge graphs;
Matching and deep learning;
Matching and embeddings;
Matching and big data;
Matching and linked data;
Instance matching, data interlinking and relations between them;
Privacy-aware matching;
Process model matching;
Large-scale and efficient matching techniques;
Matcher selection, combination and tuning;
User involvement (including both technical and organizational aspects);
Explanations in matching;
Social and collaborative matching;
Uncertainty in matching;
Expressive alignments;
Reasoning with alignments;
Alignment coherence and debugging;
Alignment management;
Matching for traditional applications (e.g., data science);
Matching for emerging applications (e.g., web tables, knowledge graphs).
SUBMISSIONS
Contributions to the workshop can be made in terms of technical papers and
posters/statements of interest addressing different issues of ontology
matching
as well as participating in the OAEI 2021 campaign. Long technical papers
should
be of max. 12 pages. Short technical papers should be of max. 5 pages.
Posters/statements of interest should not exceed 2 pages.
All contributions have to be prepared using the LNCS Style:
http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0
and should be submitted in PDF format (no later than August 9th, 2021)
through the workshop submission site at:
https://www.easychair.org/conferences/?conf=om2021
Contributors to the OAEI 2021 campaign have to follow the campaign
conditions and schedule at http://oaei.ontologymatching.org/2021/.
DATES FOR TECHNICAL PAPERS AND POSTERS:
August 9th, 2021: Deadline for the submission of papers.
September 6th, 2021: Deadline for the notification of
acceptance/rejection.
September 20th, 2021: Workshop camera ready copy submission.
October 25th, 2021: OM-2021, Virtual Conference.
Contributions will be refereed by the Program Committee.
Accepted papers will be published in the workshop proceedings as a volume
of CEUR-WS as well as indexed on DBLP.
ORGANIZING COMMITTEE
1. Pavel Shvaiko (main contact)
Trentino Digitale, Italy
2. Jérôme Euzenat
INRIA & Univ. Grenoble Alpes, France
3. Ernesto Jiménez-Ruiz
City, University of London, UK & SIRIUS, University of Oslo, Norway
4. Oktie Hassanzadeh
IBM Research, USA
5. Cássia Trojahn
IRIT, France
PROGRAM COMMITTEE (to be completed):
Alsayed Algergawy, Jena University, Germany
Manuel Atencia, INRIA & Univ. Grenoble Alpes, France
Zohra Bellahsene, LIRMM, France
Jiaoyan Chen, University of Oxford, UK
Valerie Cross, Miami University, USA
Jérôme David, University Grenoble Alpes & INRIA, France
Gayo Diallo, University of Bordeaux, France
Daniel Faria, Instituto Gulbenkian de Ciéncia, Portugal
Alfio Ferrara, University of Milan, Italy
Marko Gulic, University of Rijeka, Croatia
Wei Hu, Nanjing University, China
Ryutaro Ichise, National Institute of Informatics, Japan
Antoine Isaac, Vrije Universiteit Amsterdam & Europeana, Netherlands
Naouel Karam, Fraunhofer, Germany
Prodromos Kolyvakis, EPFL, Switzerland
Patrick Lambrix, Linköpings Universitet, Sweden
Oliver Lehmberg, University of Mannheim, Germany
Fiona McNeill, Heriot Watt University, UK
Peter Mork, MITRE, USA
Axel Ngonga, University of Paderborn, Germany
George Papadakis, University of Athens, Greece
Catia Pesquita, University of Lisbon, Portugal
Henry Rosales-Méndez, University of Chile, Chile
Kavitha Srinivas, IBM, USA
Pedro Szekely, University of Southern California, USA
Valentina Tamma, University of Liverpool, UK
Ludger van Elst, DFKI, Germany
Xingsi Xue, Fujian University of Technology, China
Ondrej Zamazal, Prague University of Economics, Czech Republic
-------------------------------------------------------
More about ontology matching:
http://www.ontologymatching.org/
http://book.ontologymatching.org/
-------------------------------------------------------
Best Regards,
Pavel
-------------------------------------------------------
Pavel Shvaiko, PhD
Trentino Digitale, Italy
http://www.ontologymatching.org/
https://www.trentinodigitale.it/
http://www.dit.unitn.it/~pavel
Dear All,
I wonder whether there is a simple way to retrieve dates with the precision with which they have been put into a wikibase.
Using SPARQL I get all years such as "1749" as "1 January 1749" statements, no matter whether the person is born that day or not. Should I run different searches?
Best,
Olaf
Dr. Olaf Simons
Forschungszentrum Gotha der Universität Erfurt
Am Schlossberg 2
99867 Gotha
Büro: +49-361-737-1722
Mobil: +49-179-5196880
Privat: Hauptmarkt 17b/ 99867 Gotha
Hello everyone,
This is to announce that we have finalized the concepts and started
development for a tool to help editors work on mismatches between
Wikidata's data and other databases/websites.
Why are we doing this?
Wikidata is becoming too big for the editors to monitor individual data
points. Additionally, keeping Wikidata’s data in sync with the external
database (meant in the broadest way possible) requires a lot of effort and
existing workflows are haphazard and one-off.
Who are we doing this for?
The target audience of this tool is tech-savvy editors whose primary goal
is finding mismatches and improving data quality. They are:
- Dedicated data quality workers who specifically seek out lists of
issues in an area of their interest to fix a number of mistakes
- Heavily affected by data quality issues
- Experienced with bots and mass-editing gadgets
What is the solution?
We will build a system that will have a store for mismatches. Different
people and organizations can load mismatches they found into this system.
Various tools can then get mismatches from the system to help editors
resolve them.
Sources for the mismatches can be many. We will start with mismatches that
we found as part of previous work to find references for statements lacking
references (aka Reference Treasure Hunt
<https://www.wikidata.org/wiki/Wikidata:Automated_finding_references_input>).
In the future, categories on Wikipedia that indicate a mismatch between the
local value on that Wikipedia and the corresponding value on Wikidata could
also serve as a source. Various research organizations as well as large data
re-users could likewise contribute mismatches they found in their internal
processes when doing quality assurance on Wikidata’s data.
We hope that this tool will help to make it easier for editors to find and
fix the mismatches between Wikidata’s data and other databases.
Feel free to ask questions or give us feedback at the discussion page:
Wikidata_talk:Mismatch_Finder
<https://www.wikidata.org/wiki/Wikidata_talk:Mismatch_Finder>
Cheers,
--
Mohammed Sadat
*Community Communications Manager for Wikidata/Wikibase*
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Phone: +49 (0)30 219 158 26-0
https://wikimedia.de
Keep up to date! Current news and exciting stories about Wikimedia,
Wikipedia and Free Knowledge in our newsletter (in German): Subscribe now
<https://www.wikimedia.de/newsletter/>.
Imagine a world in which every single human being can freely share in the
sum of all knowledge. Help us to achieve our vision!
https://spenden.wikimedia.de
Wikimedia Deutschland – Gesellschaft zur Förderung Freien Wissens e. V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/029/42207.