On 09/29/2015 08:01 AM, Daniel Kinzler wrote:
On 29.09.2015 at 11:05, Thomas Douillard wrote:
No it's not, because of the "undoing" problem. A user can't delete a statement and assume that will be enough, because nothing tells him that the statement was added by a bot and is implied by other statements, as opposed to a statement explicitly inferred by Wikibase and marked as such in the UI. If Wikibase tracked the root explicit statements used to make the inference, they could be exposed in the UI as well, to tell the user what he might have to do to correct the mistake closer to, or at, the actual root.
I agree: if we had built-in inference done Just Right (tm), with everything editable and visible in all the right places, that would be great. But this would add a lot of complexity to the system, and would take a lot of resources to implement. It would also diverge quite a bit from the classic idea of a wiki and potentially cause community issues.
The approach using bots was never ideal, but it is still hugely successful on Wikipedia. The same seems to be the case here. Also, don't underestimate the fact that the community has a lot of experience with bots, but is generally very skeptical of automated content (even just including information from Wikidata on Wikipedia pages).
So, while bots are not ideal, and a better solution is conceivable, I think bots are the right solution for the moment. We should not ignore the issues that exist with bots, and we should not lose sight of other options. But I think we should focus development on more urgent things, like a better system for source references, or unit conversion, or better tools for constraints, or for re-use on Wikipedia.
I also strongly agree that inference-making tools should record their premises. There are lots of excellent reasons to do this recording, including showing editors where changes need to be made to remove the inferred claim. Inference-making bots that do not record how a claim was inferred are even worse than an inferential system that does not do so, as determining which bot made a particular inference is harder than determining which part of an inferential system sanctions a particular inference.
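As a hedged illustration of what recording premises could look like (the claim structure, property names, and statement IDs below are hypothetical, not an existing Wikidata or bot API), an inference-making bot could attach the IDs of its premise statements to every claim it adds, so that editors and later tooling can trace an inferred claim back to the explicit statements it rests on:

# Hypothetical sketch in Python: a bot that records the premises of every
# claim it infers. Statement IDs and the property name are made up.

def infer_grandparent(parent_claims):
    """Derive 'grandparent' claims from 'parent' claims (child -> parent),
    recording which premise statements each inferred claim rests on."""
    parents_of = {}
    for claim in parent_claims:
        parents_of.setdefault(claim["subject"], []).append(claim)

    inferred = []
    for child, claims in parents_of.items():
        for c1 in claims:                                # child -> parent
            for c2 in parents_of.get(c1["object"], []):  # parent -> grandparent
                inferred.append({
                    "subject": child,
                    "property": "P_grandparent",          # hypothetical property
                    "object": c2["object"],
                    "premises": [c1["id"], c2["id"]],     # recorded premises
                    "inferred_by": "grandparent-bot",     # which bot/rule
                })
    return inferred

parent_claims = [
    {"id": "Q1$s1", "subject": "Q1", "property": "P_parent", "object": "Q2"},
    {"id": "Q2$s2", "subject": "Q2", "property": "P_parent", "object": "Q3"},
]
print(infer_grandparent(parent_claims))
# one claim: Q1 -P_grandparent-> Q3, with premises ["Q1$s1", "Q2$s2"]

With the premises recorded, an editor who finds the inferred claim wrong can see exactly which explicit statements need to change, which is the point above.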
What is the difference between a system of inference-making bots that record their premises and an inferential system that records its premises? In some sense, not much. I would thus argue that an inferential system is no more complex than a set of inference-making bots.
However, an inferential system is not limited to the implementation techniques that are needed in a bot system. It can, for example, perform some inferences only on an as-needed basis. An inferential system can also be analyzed as a whole, something that is quite difficult with a bot system.
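To make the as-needed point concrete, here is a minimal sketch, under the assumption of an in-process rule registry (all class, rule, and property names are hypothetical), of how an inferential system could compute some claims only when they are queried, rather than materializing them all up front the way a bot must:

# Hypothetical sketch in Python: an inferential system that derives claims
# on demand instead of storing every inferred claim eagerly.

from typing import Callable, Dict, List, Tuple

Claim = Tuple[str, str, str]  # (subject, property, object)

class LazyInference:
    """Answers queries from stored claims, applying rules only when asked."""

    def __init__(self, stored: List[Claim]):
        self.stored = stored
        self.rules: Dict[str, Callable[[str], List[Claim]]] = {}

    def add_rule(self, prop: str, derive: Callable[[str], List[Claim]]) -> None:
        # A rule derives claims for one property from the stored claims.
        self.rules[prop] = derive

    def query(self, subject: str, prop: str) -> List[Claim]:
        explicit = [c for c in self.stored if c[0] == subject and c[1] == prop]
        if explicit:
            return explicit
        rule = self.rules.get(prop)
        return rule(subject) if rule else []

store = LazyInference([("Q1", "parent", "Q2"), ("Q2", "parent", "Q3")])

def derive_grandparent(subject: str) -> List[Claim]:
    # Only runs when someone actually asks for a 'grandparent' claim.
    return [(subject, "grandparent", o2)
            for s, p, o in store.stored if s == subject and p == "parent"
            for s2, p2, o2 in store.stored if s2 == o and p2 == "parent"]

store.add_rule("grandparent", derive_grandparent)
print(store.query("Q1", "grandparent"))  # [("Q1", "grandparent", "Q3")]

Because the rules live in one place, the system as a whole can also be inspected and analyzed, which is much harder when the logic is spread across independent bots.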
I would argue that inference-making bots should be considered only as a stop-gap measure, and that a different mechanism should be considered for making inferences in Wikidata. I am not arguing for Inference done Just Right (tm). It is not necessary to get inference perfect the first time around. All that is required is an inference mechanism that is examinable and maybe overridable.
Peter F. Patel-Schneider
Nuance Communications