Hoi,
Balderdash, and Wikipedia think. When you say Wikipedia has quality, and it does, it does not have absolute quality. I have added a lot of information from Wikipedia to Wikidata, and from a data perspective a lot of it is plain wrong: there are the errors, and there is a lot that is just missing. This is particularly true when the subject is not really what people are interested in, things like the Polk Award or the subdistricts of Botswana; the list is long. I am adding much of that information by hand, filling in the missing parts, and the main use for the missing data is in the relations.
As I have said so often, the quality of data lies in having the same data in multiple sources. It follows that the data that can safely be added to Wikidata is the data where multiple sources agree on the represented facts. This is done most easily by bots, and indeed their algorithms are defined in their code. When new data is included based on a multitude of sources, what is the source? Particularly when data is inconsistent, because multiple sources cannot agree on specific facts, sources become relevant, but that is also where you get into real research.
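To make the idea concrete, here is a minimal sketch of that "add only where sources agree" check. The source names and lookup functions are hypothetical stand-ins, not the API of any actual Wikidata bot; real bots (for example those built on pywikibot) differ in the details.

    # Minimal sketch, assuming hypothetical source lookups.
    def values_from_sources(item_id: str, prop: str, sources: dict) -> dict:
        """Collect the value each source reports for (item, property)."""
        return {name: lookup(item_id, prop) for name, lookup in sources.items()}

    def safe_to_add(values: dict, minimum: int = 2):
        """Return the agreed value only if at least `minimum` sources report
        exactly the same value; otherwise leave it for human research."""
        reported = [v for v in values.values() if v is not None]
        if len(reported) >= minimum and len(set(reported)) == 1:
            return reported[0]   # consensus: a bot can add this safely
        return None              # disagreement or missing: not bot material

    # Example with made-up in-memory "sources":
    sources = {
        "source_a": lambda item, prop: "1948-05-14",
        "source_b": lambda item, prop: "1948-05-14",
        "source_c": lambda item, prop: None,
    }
    agreed = safe_to_add(values_from_sources("Q123", "P569", sources))
    print(agreed)  # "1948-05-14", because two sources agree

The point of the sketch is only that the agreement rule itself is the algorithm; the moment the sources disagree, the question of which source to cite, and which to trust, is no longer something the code can answer.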
Arguably, when data sources differ, you easily get into disputed facts and fake facts. This is where sourcing the facts becomes relevant. It is also where you get into real research and where, as a consequence, the license of the information becomes irrelevant.
In my opinion, we have grown up thinking in terms of serial sourcing, and particularly when you apply that approach to data stores like Wikidata, your algorithms and your thinking fail to match reality.
Thanks,
GerardM