Apologies for cross-posting
Dear all,
We are proud to announce DBpedia Archivo (https://archivo.dbpedia.org),
an augmented ontology archive and interface for implementing FAIRer
ontologies. Each ontology is rated on a scale of 4 stars measuring basic
FAIR features. We discovered 890 ontologies, reaching on average 1.95 out
of 4 stars. Many of them have missing or unclear licenses, or have issues
with respect to retrieval and parsing.
# Community action on individual ontologies
We would like to call on all ontology maintainers and consumers to help
us increase the average star rating of the web of ontologies by fixing
and improving its ontologies. You can easily check an ontology at
https://archivo.dbpedia.org/info. If you are an ontology maintainer, just
release a patched version - Archivo will automatically pick it up within 8
hours. If you are a user of an ontology and want your consumed data to
become FAIRer, please inform the ontology maintainer about the issues
Archivo found.
The star rating is very basic and only requires fixing small things;
however, the impact on technical and legal usability can be immense.
# Community action on all ontologies (quality, FAIRness, conformity)
Archivo is extensible and allows contributions to give consumers a
central place to encode their requirements. We envision fostering
adherence to standards and strengthening incentives for publishers to
build a better (FAIRer) web of ontologies.
1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia's CTO D.
Kontokostas) enables easy testing of ontologies. Archivo offers free
SHACL continuous integration testing for ontologies. Anyone can
implement their SHACL tests and add them to the SHACL library on
GitHub. We believe that there are many synergies, e.g. SHACL tests
for your ontology are helpful for others as well.
2. We are looking for ontology experts to join DBpedia and discuss
further validation (e.g. stars) to increase the FAIRness and quality of
ontologies. We are forming a steering committee and also a PC for
the upcoming Vocarnival at SEMANTiCS 2021. Please message
hellmann@informatik.uni-leipzig.de if you would like to join. We would
like to extend the Archivo platform with relevant visualisations, tests,
editing aides, mapping management tools and quality checks.
# How does Archivo work?
Each week, Archivo runs several discovery algorithms to scan for new
ontologies. Once discovered, Archivo checks each ontology every 8 hours.
When changes are detected, Archivo downloads, rates, and archives the
latest snapshot persistently on the DBpedia Databus.
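The change-detection step above can be illustrated with content hashing; this is a hypothetical sketch of the general idea, not Archivo's documented mechanism.

```python
# Hypothetical sketch of change detection via content hashing; Archivo's
# actual mechanism is not documented here, this only illustrates the idea.
import hashlib

def fingerprint(serialised_ontology: bytes) -> str:
    """Stable fingerprint of one ontology snapshot."""
    return hashlib.sha256(serialised_ontology).hexdigest()

previous = fingerprint(b"<http://example.org/o> a owl:Ontology .")
current = fingerprint(b"<http://example.org/o> a owl:Ontology . # edited")

if current != previous:
    print("change detected -> download, rate, and archive a new snapshot")
```

Note that a byte-level hash also flags purely cosmetic changes (whitespace, serialisation order); a real archiver might canonicalise the graph before comparing.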
# Archivo's mission
Archivo's mission is to improve FAIRness (findability, accessibility,
interoperability, and reusability) of all available ontologies on the
Semantic Web. Archivo is not a guideline; it is fully automated and
machine-readable, and enforces interoperability with its star rating.
- Ontology developers can implement against Archivo until they reach
more stars. The stars and tests are designed to guarantee the
interoperability and fitness of the ontology.
- Ontology users can better find, access and re-use ontologies.
Snapshots are persisted in case the original is no longer reachable,
adding a layer of reliability to the decentralised web of ontologies.
Let’s all join together to make the web of ontologies more reliable and
stable,
Johannes Frey, Denis Streitmatter, Fabian Götz, Sebastian Hellmann and
Natanael Arndt
Paper: https://svn.aksw.org/papers/2020/semantics_archivo/public.pdf
Hi all,
Join the Research Team at the Wikimedia Foundation [1] for their monthly
office hours on 2020-12-01 at 17:00-18:00 UTC (9am PT / 6pm CET).
To participate, join the video call via this Wikimedia Meet link [2]. There
is no set agenda; feel free to add your item to the list of topics in the
etherpad [3] (you can also do this after joining the meeting), or just come
to hang out. More detailed information (e.g. about how to attend) can be
found here [4].
Through these office hours, we aim to make ourselves more available to
answer some of the research-related questions that you as Wikimedia
volunteer editors, organizers, affiliates, staff, and researchers face in
your projects and initiatives. Some examples of cases where we hope to be
able to support you:
- You have a specific research-related question that you suspect you
should be able to answer with the publicly available data, but you don't
know how to find the answer, or you just need some more help with it.
For example: how can I compute the ratio of anonymous to registered
editors on my wiki?
- You run into repetitive or very manual work as part of your Wikimedia
contributions and you wish to find out whether there are ways to use
machines to improve your workflows. These conversations can sometimes be
harder to resolve during an office hour; however, discussing them helps
us understand your challenges better, and we may find ways to work with
each other to address them in the future.
- You want to learn what the Research team at the Wikimedia Foundation
does and how we can potentially support you. Specifically for affiliates:
if you are interested in building relationships with the academic
institutions in your country, we would love to talk with you and learn
more. We have a series of programs that aim to expand the network of
Wikimedia researchers globally, and we would love to collaborate more
closely with those of you interested in this space.
- You want to talk with us about one of our existing programs [5].
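The first example question above (ratio of anonymous to registered editors) can be sketched in a few lines once the edits have been fetched; the snippet below is a hypothetical illustration that assumes edits arrive as dicts in which anonymous (IP) edits carry an "anon" key, similar to the MediaWiki API's recent-changes results, and is not an official recipe.

```python
# Hypothetical sketch: ratio of anonymous to registered editors among a
# sample of edits. Assumes edits were already fetched as dicts, with
# anonymous (IP) edits carrying an "anon" key, similar to the MediaWiki
# API's recent-changes results. Not an official recipe.

def anon_to_registered_ratio(edits):
    anon = sum(1 for e in edits if "anon" in e)
    registered = len(edits) - anon
    if registered == 0:
        raise ValueError("no registered edits in sample")
    return anon / registered

# Toy sample standing in for real API results:
sample = [
    {"user": "203.0.113.7", "anon": ""},  # IP (anonymous) edit
    {"user": "Alice"},                    # registered editor
    {"user": "Bob"},                      # registered editor
]
print(anon_to_registered_ratio(sample))  # 0.5
```

In practice you would also decide on a time window and whether to count edits or distinct editors; those choices change the answer, which is exactly the kind of question an office hour can help with.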
Hope to see many of you,
Martin (WMF Research Team)
[1] https://research.wikimedia.org/team.html
[2] https://meet.wmcloud.org/ResearchOfficeHours
[3] https://etherpad.wikimedia.org/p/Research-Analytics-Office-hours
[4] https://www.mediawiki.org/wiki/Wikimedia_Research/Office_hours
[5] https://research.wikimedia.org/projects.html
--
Martin Gerlach
Research Scientist
Wikimedia Foundation