I don't like "significant digits" because it depends on the writing system (base 10). I'd much rather express this as absolute values.
Yes, I would like that too. What I argue is that the problem is that in 99.99999 % of cases (not a researched number, of course) you simply don't know more than that the value was written with a given number of base-10 digits. Whether that is meaningful, just sloppy, or even a wilful simplification (probably the vast majority of quantities in current Wikipedia belong to the latter category) is unknown.
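For illustration, this is about all an automatic estimate could do with the written digits (a minimal Python sketch; implied_interval is a made-up helper, not anything that exists in Wikidata): it treats the last written base-10 digit as the rounding step, which is exactly the guess in question.

    from decimal import Decimal

    def implied_interval(text: str) -> tuple[Decimal, Decimal]:
        # Interval implied by rounding to the last written digit,
        # e.g. "1.20" -> 1.20 +/- 0.005.
        value = Decimal(text)
        half_step = Decimal(1).scaleb(value.as_tuple().exponent) / 2
        return value - half_step, value + half_step

    print(implied_interval("1.20"))  # (Decimal('1.195'), Decimal('1.205'))
    print(implied_interval("100"))   # (Decimal('99.5'), Decimal('100.5'))
                                     # ...but "100" could just as well mean +/- 50

Note how "100" already breaks the heuristic: the trailing zeros may or may not be significant.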
That means that the figure is not usable for query answering at all. If we don't know the level of certainty, we cannot use the number.
That will usually be the case. Unless you know which kind of margin the numbers reflect, you cannot use them for answering anyway. What do you do with the two examples 100 +/- 50 and 100 +/- 0.1, which are results of the same dataset and precisely reflect the same quantity? If you know that the first is a 95% measure of dispersion and the second a 95% CI for the mean, you can ask people whether they are looking for the mean (best estimate) or for a single observation.
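To make that concrete, here is a small simulation (Python with numpy; the distribution parameters are invented purely for illustration) in which both statements fall out of the same data:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical dataset: 250,000 observations around 100 with sd ~25.5.
    data = rng.normal(loc=100, scale=25.5, size=250_000)

    # 95% dispersion interval: where single observations fall -> roughly 100 +/- 50.
    lo, hi = np.percentile(data, [2.5, 97.5])

    # 95% confidence interval for the mean -> roughly 100 +/- 0.1.
    half = 1.96 * data.std(ddof=1) / np.sqrt(data.size)

    print(f"dispersion: [{lo:.1f}, {hi:.1f}]")              # ~ [50.0, 150.0]
    print(f"mean CI:    {data.mean():.1f} +/- {half:.2f}")  # ~ 100.0 +/- 0.10

Both are honest 95% statements about the same quantity; without the label you cannot tell which one you have.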
Make the interval endpoints an option. If explicitly entered: excellent information. If not: don't try to create (false) knowledge from a void.
Yes, it will be an option. Making the default "unknown" would be bad though, I think.
The default has to reflect reality. If you make it complicated to enter the actual default situation and automatically add some margin of error/dispersion/tolerance, then people will simply let it happen, start ignoring it, not understand it, and in the end Wikidata will be known as a bunch of unreliably encoded information.
However, we should probably store whether the level of certainty was given explicitly or estimated automatically from the number of significant digits; then we can still ignore the automatic values when desired (see the sketch below).
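Something like the following would do (a Python sketch of one possible shape, not the actual Wikidata data model; the field names are invented):

    from dataclasses import dataclass
    from typing import Literal, Optional

    @dataclass
    class Quantity:
        value: float
        lower: Optional[float] = None
        upper: Optional[float] = None
        # "explicit"  -> bounds were entered by the editor
        # "estimated" -> bounds were derived from significant digits
        bounds_source: Literal["explicit", "estimated", "none"] = "none"

    def trusted_bounds(q: Quantity) -> bool:
        # Re-users who distrust automatic bounds can filter on the flag.
        return q.bounds_source == "explicit"

The point is only that the provenance of the bounds travels with the value.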
Which will force all re-users to understand this distinction and to throw those values away before any analysis...
Why so complicated?
Gregor