Hello,
I wanted to make you aware of our new paper "Doctoral Advisor or Medical
Condition: Towards Entity-specific Rankings of Knowledge Base Properties",
which deals with the problem of determining the interestingness of Wikidata
properties for individual entities.
In the paper we construct a dataset of 350 random (entity, property1,
property2) records and use human judgments to determine the more
interesting of the two properties in each record.
We then show that state-of-the-art techniques (Wikidata Property Suggestor,
Google search) achieve 61% precision in predicting the winner on
high-agreement records; adding linguistic similarity lifts this to 74%,
which still remains significantly below human performance (87.5%
precision).
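For illustration, the evaluation above boils down to pairwise precision: the
fraction of records in which a ranker picks the same property as the human
judges. Here is a minimal sketch of that computation, assuming a simple record
format; all field names, entities, and the toy ranker are hypothetical and not
taken from the paper or dataset:

```python
def pairwise_precision(records, predict):
    """records: dicts with 'entity', 'p1', 'p2', and the human-preferred
    'winner'; predict: function (entity, p1, p2) -> predicted winner.
    Returns the fraction of records predicted correctly."""
    correct = sum(
        1 for r in records
        if predict(r["entity"], r["p1"], r["p2"]) == r["winner"]
    )
    return correct / len(records)

# Toy high-agreement records (invented for illustration).
records = [
    {"entity": "Albert Einstein", "p1": "doctoral advisor",
     "p2": "medical condition", "winner": "doctoral advisor"},
    {"entity": "Douglas Adams", "p1": "cause of death",
     "p2": "shoe size", "winner": "cause of death"},
    {"entity": "Marie Curie", "p1": "eye color",
     "p2": "award received", "winner": "award received"},
]

# A trivial baseline ranker that always prefers the first property.
first_property = lambda entity, p1, p2: p1
print(pairwise_precision(records, first_property))  # 2 of 3 correct
```

A real ranker would of course replace the trivial baseline with signals such
as property-suggestion scores or linguistic similarity, as in the paper.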
Paper:
http://www.simonrazniewski.com/2017_ADMA.pdf (to appear at ADMA
2017).
Dataset:
https://www.kaggle.com/srazniewski/wikidatapropertyranking
Best wishes,
Simon Razniewski