Hi,
Much of the content of DBpedia and Wikidata has the same origin:
harvesting data from a Wikipedia. There is a lot of discussion going on
about quality, and one point that I make is that comparing "sources",
and concentrating on the places where their statements differ, is where
it is easiest to make a difference in quality.
So, given that DBpedia harvests both Wikipedia and Wikidata, can it provide
us with a view of where a Wikipedia statement and a Wikidata statement differ?
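To make this concrete, here is a minimal sketch of such a comparison for a
single entity and a single property, run against the public endpoints with
Python and SPARQLWrapper. The entity (dbr:Alan_Turing) and the property pair
(dbo:birthDate on the DBpedia side, P569 on the Wikidata side) are only
illustrative choices, not a proposed design:

from SPARQLWrapper import SPARQLWrapper, JSON

def ask(endpoint, query):
    # Run a SELECT query and return the JSON result bindings.
    client = SPARQLWrapper(endpoint, agent="statement-diff-sketch/0.1")
    client.setQuery(query)
    client.setReturnFormat(JSON)
    return client.query().convert()["results"]["bindings"]

# 1. Ask DBpedia for the (Wikipedia-derived) birth date plus the
#    owl:sameAs link pointing at the matching Wikidata item.
dbp = ask("https://dbpedia.org/sparql", """
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?date ?wd WHERE {
      dbr:Alan_Turing dbo:birthDate ?date ;
                      owl:sameAs ?wd .
      FILTER(STRSTARTS(STR(?wd), "http://www.wikidata.org/entity/"))
    }""")[0]

# 2. Ask Wikidata for its own value of the same statement
#    (P569 = date of birth).
wd = ask("https://query.wikidata.org/sparql", """
    SELECT ?date WHERE {
      <%s> <http://www.wikidata.org/prop/direct/P569> ?date .
    }""" % dbp["wd"]["value"])[0]

# 3. A mismatch between the two values is exactly the kind of row
#    the proposed view would surface.
print("DBpedia :", dbp["date"]["value"])
print("Wikidata:", wd["date"]["value"])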
To make it useful, it is important to subset this data. I will not start
with 500,000 differences; I will begin with a subset that I care about.
When I care about the entries for alumni of a university, I will consider
curating the information in question, particularly when I know the language
of the Wikipedia involved.
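As one assumption about how such a subset could be cut, this sketch pulls
the alumni of a single university from DBpedia together with their Wikidata
items; dbo:almaMater and University_of_Amsterdam are merely examples:

from SPARQLWrapper import SPARQLWrapper, JSON

client = SPARQLWrapper("https://dbpedia.org/sparql")
client.setReturnFormat(JSON)
client.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?person ?wd WHERE {
      ?person dbo:almaMater dbr:University_of_Amsterdam ;
              owl:sameAs ?wd .
      FILTER(STRSTARTS(STR(?wd), "http://www.wikidata.org/entity/"))
    } LIMIT 100""")

# Each (DBpedia person, Wikidata item) pair is a candidate for the
# per-statement comparison sketched above.
for row in client.query().convert()["results"]["bindings"]:
    print(row["person"]["value"], "<->", row["wd"]["value"])

Feeding each of those pairs into the earlier comparison would give exactly
the subset view I am asking for.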
Once we can do this, another thing that will promote the use of a tool like
this is regularly (say once a month) storing the numbers and publishing the
trends.
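The storage side of that could start very small, e.g. a dated count appended
to a file by a scheduled job; in this throwaway sketch the file name and the
count are placeholders:

import csv
import datetime

def record_snapshot(count, path="difference_trend.csv"):
    # Append today's difference count so a trend can be charted later.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.date.today().isoformat(), count])

# e.g. from a monthly cron job, after counting the differences
# for the subset in question (1234 is a placeholder):
record_snapshot(1234)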
How difficult is it to come up with something like this? I know this tool
would be based on DBpedia, but there are several reasons why that is good.
First, it gives added relevance to DBpedia (without detracting from
Wikidata), and second, as DBpedia updates on the RSS changes of several
Wikipedias, the effect of those changes is quickly noticed when a new set
of data is requested.
Please let us know what the issues are and what it takes to move forward
with this. Does this make sense?
Thanks,
GerardM
http://ultimategerardm.blogspot.nl/2017/03/quality-dbpedia-and-kappa-alpha-…