Forgot to mention: this is not really about _quality_ (Gerrard's model of quality), it is about _trust_ and _reputation_. Something can have low quality and high trust (cheap cell phones, for example), and the reputation might not reflect the actual quality.

You (usually) measure reputation and calculate trust, but I have seen it done the other way around. The end result is the same either way.

On Wed, Mar 22, 2017 at 3:31 PM, John Erling Blad <jeblad@gmail.com> wrote:
Sitelinks to an item are an approximation of the number of views of the data from an item, and as such give an approximation of the likelihood of detecting an error. Few views imply a longer time span before an error is detected. It is really about estimating quality as a function of the age of the item and its number of page views, approximated through sitelinks.
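The idea above can be sketched as a toy model: if each view has some small independent chance of spotting an error, the probability that an error survives shrinks with the number of views, and sitelinks stand in as a crude proxy for views. All parameter values here are illustrative assumptions, not numbers from this thread.

```python
def p_error_undetected(views, p_detect_per_view=0.01):
    """Probability an error survives `views` independent views,
    each with a small chance of catching it (assumed parameter)."""
    return (1 - p_detect_per_view) ** views

def views_from_sitelinks(sitelinks, views_per_sitelink=100):
    """Crude proxy: pretend every sitelink contributes a fixed
    number of views (an assumption; real traffic varies wildly,
    which is exactly why sitelinks are a bad approximation)."""
    return sitelinks * views_per_sitelink

# An item with many sitelinks is far more likely to have had an error caught.
few = p_error_undetected(views_from_sitelinks(1))
many = p_error_undetected(views_from_sitelinks(50))
print(few, many)
```

The constant views-per-sitelink assumption is the weak point the mail complains about: two items with the same sitelink count can differ in traffic by orders of magnitude.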

The problem is that the number of sitelinks is not a good approximation. Yes, it is a simple approximation, but it is still a pretty bad one.

References are another way to verify the data, but that is not a valid argument against measuring the age of the data.

I've been toying for some time with an idea that uses statistical inference to try to identify questionable facts, but it will probably never be done; it is way too much work to do in spare time.
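The mail does not say which inference method was intended, so purely as an illustration of the general idea: given many values of the same property (say, heights of people), a simple outlier test can flag values that deviate strongly from the rest of the distribution. The dataset and the z-score approach here are my assumptions, not the author's design.

```python
from statistics import mean, stdev

def flag_questionable(values, threshold=2.0):
    """Flag values whose z-score exceeds `threshold`.
    A plain z-score test, chosen only as a minimal example of
    statistical inference over a property's value distribution;
    the threshold is tuned low for this tiny sample."""
    m, s = mean(values), stdev(values)
    if s == 0:
        return []
    return [v for v in values if abs(v - m) / s > threshold]

# Heights in cm with one likely data-entry error (hypothetical data).
heights_cm = [170, 175, 168, 180, 172, 1750]
print(flag_questionable(heights_cm))  # → [1750]
```

A real system would need robust statistics (e.g. median-based deviation) and per-property modelling, which is where the "way too much work" comes in.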