Hello,

As the importance of Wikidata increases, so do the demands on the quality of the data. I would like to put the following proposal up for discussion.

Two basic ideas:
  1. Each Wikidata page (item) is scored after each edit. This score should express several dimensions of data quality in a way that can be grasped at a glance.
  2. A property is created through which the item refers to its score value. Qualifiers can be used for a more detailed description (e.g. time of calculation, algorithm used to calculate the score, etc.); see the sketch right after this list for what such a statement could look like.
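
To make idea 2 a bit more concrete, here is a minimal sketch (Python, using pywikibot) of how a bot could write such a score statement. The score property itself does not exist yet, so "P9999" is only a placeholder for the proposed property; using "point in time" (P585) and "determination method" (P459) as the qualifiers is just one possible choice, and Q42 / the method item are example values.

import pywikibot

SCORE_PROPERTY = "P9999"   # placeholder for the proposed "quality score" property
TIME_QUALIFIER = "P585"    # "point in time" - time of calculation
METHOD_QUALIFIER = "P459"  # "determination method" - algorithm used

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()
item = pywikibot.ItemPage(repo, "Q42")  # example item

# Statement: <item> <quality score> 0.82
claim = pywikibot.Claim(repo, SCORE_PROPERTY)
claim.setTarget(pywikibot.WbQuantity(amount="0.82", site=repo))

# Qualifier: time of calculation
when = pywikibot.Claim(repo, TIME_QUALIFIER)
when.setTarget(pywikibot.WbTime(year=2024, month=1, day=1))
claim.addQualifier(when)

# Qualifier: algorithm used (hypothetical item describing the scoring method)
method = pywikibot.Claim(repo, METHOD_QUALIFIER)
method.setTarget(pywikibot.ItemPage(repo, "Q99999999"))
claim.addQualifier(method)

item.addClaim(claim, summary="Add data quality score")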

The score value can be calculated either within Wikibase after each data change or "externally" by a bot. Among other things, the calculation could draw on the number of constraint violations, the completeness of references, the degree of completeness with respect to the underlying ontology, etc. There are already some interesting discussions on data quality that could be built on here (see https://www.wikidata.org/wiki/Wikidata:Item_quality; https://www.wikidata.org/wiki/Wikidata:WikiProject_Data_Quality; etc.).
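
As a rough illustration of the "externally by a bot" variant, the following sketch (again Python with pywikibot) combines two of the dimensions mentioned above: the share of referenced statements and a very crude completeness measure. The weights and the list of "expected" properties are purely illustrative assumptions, not a worked-out metric; in practice the expected properties would depend on the item's class in the ontology.

import pywikibot

def quality_score(item_id, expected_props=("P31", "P18", "P569")):
    """Very rough quality score in [0, 1]; dimensions and weights are illustrative only."""
    site = pywikibot.Site("wikidata", "wikidata")
    repo = site.data_repository()
    item = pywikibot.ItemPage(repo, item_id)
    item.get()

    claims = [c for group in item.claims.values() for c in group]
    if not claims:
        return 0.0

    # Dimension 1: share of statements that carry at least one reference
    referenced = sum(1 for c in claims if c.sources)
    reference_share = referenced / len(claims)

    # Dimension 2: crude completeness - how many "expected" properties are present
    present = sum(1 for p in expected_props if p in item.claims)
    completeness = present / len(expected_props)

    return 0.5 * reference_share + 0.5 * completeness

print(quality_score("Q42"))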

Advantages:

Disadvantages:

I would now be interested in the following:
  1. Is this idea suitable for effectively helping to solve existing quality problems?
  2. Which quality dimensions should the score value represent?
  3. Which quality dimensions can be calculated with reasonable effort?
  4. How should they be calculated and represented?
  5. What is the most suitable way to further discuss and implement this idea?

Many thanks in advance.

Uwe Jung (UJung)
www.archivfuehrer-kolonialzeit.de/thesaurus