Call For Papers
1st International Workshop on Approaches for Making Data Interoperable (AMAR 2019)
https://events.tib.eu/amar2019/
co-located with SEMANTiCS 2019
September 09 - 12, 2019, Karlsruhe, Germany
------------------------------------------------------------------------------------------------------------------------
Overview
------------------------------------------------------------------------------------------------------------------------
Recently, there has been rapid growth in the amount of data available on the Web. Data is produced by different communities working in a wide range of domains, using a variety of techniques, which results in a large volume of data in different formats and languages. The accessibility of such heterogeneous, multilingual data becomes an obstacle to reuse due to incompatible data formats and the language gap, and this incompatibility keeps data sources from reaching the right communities. For instance, most open-domain question answering systems are designed to work on data represented in RDF; they cannot operate on data in the very common CSV format or in unstructured form. Moreover, the data they draw from is usually in English, rendering them unable to answer questions asked, e.g., in Spanish. Conversely, NLP applications in Spanish cannot make use of a knowledge graph in English. Different communities have different requirements in terms of data representation and modeling, so making data interoperable is crucial to making it accessible to a variety of applications.
------------------------------------------------------------------------------------------------------------------------
Topics of Interest
------------------------------------------------------------------------------------------------------------------------
We invite paper submissions from two communities: (i) data consumers and (ii) data providers. This includes practitioners, such as data scientists, who have experience in fitting the available data to their use case; Semantic Web researchers who have been investigating the reuse of heterogeneous data in tools; researchers in the fields of data linking and translation; and other researchers working in the general field of data integration.
We invite submissions from the following communities:
- Data Integration
- Multilingual Data
- Data Linking
- Ontology and Knowledge Engineering
We welcome original contributions about all topics related to data interoperability, including but not limited to:
- Approaches to convert data between formats, languages, and schemas
- Best practices for processing heterogeneous data
- Translation of data between different languages
- Cross-lingual applications
- Recommendations for language modeling in linked data
- Labeling of data with natural language information
- Datasets for different communities' data needs
- Tools reusing different data formats
- Converting datasets between different formats
- Applications in different domains, e.g., Life Sciences, Scholarly, Industry 4.0, Humanities
------------------------------------------------------------------------------------------------------------------------
Author Instructions
------------------------------------------------------------------------------------------------------------------------
Paper submission for this workshop will be via EasyChair (https://easychair.org/conferences/?conf=amar2019). Papers should follow the Springer LNCS format and be submitted as PDF on or before July 9, 2019 (midnight Hawaii time).
We accept papers of the following formats:
- Full research papers (8 - 12 pages)
- Short research papers (3 - 5 pages)
- Position papers (6 - 8 pages)
- Resource papers (8 - 12 pages, including the publication of the dataset)
- In-Use papers (6 - 8 pages)
Accepted papers will be published as CEUR workshop proceedings. We target the creation of a special issue including the best papers of the workshop.
------------------------------------------------------------------------------------------------------------------------
Important Dates
------------------------------------------------------------------------------------------------------------------------
Submission: July 9, 2019
Notification: July 30, 2019
Workshop: September 9, 2019
------------------------------------------------------------------------------------------------------------------------
Workshop Organizers
------------------------------------------------------------------------------------------------------------------------
Lucie-Aimée Kaffee, University of Southampton, UK & TIB Leibniz Information Centre for Science and Technology, Hannover, Germany
Kemele M. Endris, TIB Leibniz Information Centre for Science and Technology and L3S Research Centre, University of Hannover, Germany
Maria-Esther Vidal, TIB Leibniz Information Centre for Science and Technology and L3S Research Centre, University of Hannover, Germany
Please contact us if you have any questions.
Hi Wikidata people,
I am trying to organise a QuickStatement Input for churches and people who held offices in these churches - on our Wikibase.
Using QuickStatements, I run into a problem when a person moves into a new position/qualification - as you can see here with Friedrich Linde, who spends some years as an assistant before he becomes a pastor:
https://database.factgrid.de/wiki/Item:Q43367
QuickStatements states all the positions and then all the years on different heaps - where I would love to have the dates on the respective positions.
If I add the same man with a new position manually I get the trick done (but that is nothing I want to do with several thousand office terms).
Maybe my entire approach is stupid - any idea how I could sort this out better?
cheers, Olaf
PS: my CSV input (version 1 makes no difference)
qid,P299,qal166,qal49,qal50,qal106
Q43367,Q42672,Q36857,+1541-00-00T00:00:00Z/9,+1551-00-00T00:00:00Z/9
Q43367,Q43135,Q36857,+1554-00-00T00:00:00Z/9,+1594-00-00T00:00:00Z/9
Q43367,Q41734,Q43664,+1584-00-00T00:00:00Z/9,+1594-00-00T00:00:00Z/9
Q43367,Q41734,Q36857,+1594-00-00T00:00:00Z/9,+1631-00-00T00:00:00Z/9
Q43367,Q43199,Q43664,,,+1627-00-00T00:00:00Z/9
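[Read programmatically, each row above pairs one office (the P299 value) with its own qualifier columns - the grouping Olaf wants to land on the item. A minimal Python sketch of that reading, with column names taken verbatim from the header; the meaning of the FactGrid-specific qal* properties is assumed, not confirmed:]

```python
import csv
import io

# Olaf's CSV input, verbatim. Rows 1-4 have no qal106 cell; DictReader
# fills the missing column with None.
raw = """qid,P299,qal166,qal49,qal50,qal106
Q43367,Q42672,Q36857,+1541-00-00T00:00:00Z/9,+1551-00-00T00:00:00Z/9
Q43367,Q43135,Q36857,+1554-00-00T00:00:00Z/9,+1594-00-00T00:00:00Z/9
Q43367,Q41734,Q43664,+1584-00-00T00:00:00Z/9,+1594-00-00T00:00:00Z/9
Q43367,Q41734,Q36857,+1594-00-00T00:00:00Z/9,+1631-00-00T00:00:00Z/9
Q43367,Q43199,Q43664,,,+1627-00-00T00:00:00Z/9"""

statements = []
for row in csv.DictReader(io.StringIO(raw)):
    # Keep only the non-empty qualifier cells belonging to this row.
    qualifiers = {k: v for k, v in row.items() if k.startswith("qal") and v}
    statements.append((row["qid"], row["P299"], qualifiers))

for qid, position, quals in statements:
    print(qid, position, quals)
```

[Note that rows 3 and 4 carry the same P299 value (Q41734) with different dates - exactly the duplicate-value case that trips QuickStatements up.]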
Dr. Olaf Simons Forschungszentrum Gotha der Universität Erfurt Schloss Friedenstein, Pagenhaus 99867 Gotha
Büro: +49-361-737-1722 Mobil: +49-179-5196880
Privat: Hauptmarkt 17b/ 99867 Gotha
Hi Olaf,
This is a known issue with QS, unfortunately - it's not able to distinguish between two statements with the same value.
On Wikidata proper, you can use wikidata-cli for this (it is able to identify individual statements and attach qualifiers to them), but I don't know how easy it would be to port that over to use on your site. The every-politician people also developed a script specifically for this sort of P39 situation (https://www.wikidata.org/wiki/User:PositionStatements_Bot), but I don't know if their code is available.
Andrew.
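[The mismatch Andrew describes can be illustrated with a toy in-memory model - not actual QuickStatements or Wikibase code, just the two matching strategies side by side: a tool that looks up an existing claim by property and value before attaching qualifiers piles every date onto the first Q41734 claim it finds, whereas a tool that creates a fresh claim per input row keeps each office term separate:]

```python
# Two office terms with the same position value (Q41734) but different dates,
# mirroring rows 3 and 4 of Olaf's CSV. Qualifier names are illustrative only.
rows = [
    ("P299", "Q41734", [("start", "+1584"), ("end", "+1594")]),
    ("P299", "Q41734", [("start", "+1594"), ("end", "+1631")]),
]

def merge_into_existing(rows):
    """QuickStatements-like: reuse a claim that already has this value."""
    claims = []
    for prop, value, quals in rows:
        for c in claims:
            if c["prop"] == prop and c["value"] == value:
                c["qualifiers"].extend(quals)  # all dates end up on one claim
                break
        else:
            claims.append({"prop": prop, "value": value, "qualifiers": list(quals)})
    return claims

def one_claim_per_row(rows):
    """PositionStatements-like: a fresh claim for every input row."""
    return [{"prop": p, "value": v, "qualifiers": list(q)} for p, v, q in rows]

merged = merge_into_existing(rows)
print(len(merged), len(merged[0]["qualifiers"]))   # 1 4  -> one claim, dates heaped together
separate = one_claim_per_row(rows)
print([len(c["qualifiers"]) for c in separate])    # [2, 2] -> each term keeps its own dates
```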
On Wed, 8 May 2019 at 10:15, Olaf Simons olaf.simons@pierre-marteau.com wrote:
On Wed, 8 May 2019 at 13:05, Andrew Gray andrew@generalist.org.uk wrote:
The every-politician people also developed a script specifically for this sort of P39 situation (https://www.wikidata.org/wiki/User:PositionStatements_Bot) but I don't know if their code is available
The code for PositionStatements (which uses the same format of statement as QuickStatements, but creates a new 'position held' claim each time, rather than unhelpfully adding all the qualifiers to the first statement found) is at https://github.com/everypolitician/position_statements
Tony