In the DecimalMath class you escalate scaling based on the length of the
fractional part during multiplication. That is what they do in school, but it
leads to false precision. It can be argued that this is both wrong and right.
Is there any particular reason (use case) why you do that? I don't escalate
scaling in the Lua lib.
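The difference between the two policies can be sketched with Python's stdlib decimal module (the function names are mine and this is only an illustration of the two behaviours, not the actual DecimalMath code):

```python
from decimal import Decimal, ROUND_HALF_UP

def frac_len(s: str) -> int:
    """Number of digits after the decimal point in a decimal string."""
    return len(s.split(".")[1]) if "." in s else 0

def mul_escalating(a: str, b: str) -> str:
    """Schoolbook multiply: the product keeps frac_len(a) + frac_len(b)
    fractional digits, so "1.5" * "2.25" yields "3.375" -- three digits of
    apparent precision from inputs that only had one and two."""
    scale = frac_len(a) + frac_len(b)
    q = Decimal(1).scaleb(-scale)  # e.g. Decimal("0.001") for scale 3
    return str((Decimal(a) * Decimal(b)).quantize(q, rounding=ROUND_HALF_UP))

def mul_fixed(a: str, b: str) -> str:
    """Non-escalating alternative: round the product back to the larger of
    the two operand scales, avoiding the escalation (and the false
    precision) at the cost of an extra rounding step."""
    scale = max(frac_len(a), frac_len(b))
    q = Decimal(1).scaleb(-scale)
    return str((Decimal(a) * Decimal(b)).quantize(q, rounding=ROUND_HALF_UP))
```

With escalation, "1.5" * "2.25" gives "3.375"; without it, the product is rounded back to two fractional digits, "3.38".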
<rant>I see you have bumped into the problem of whether precision is a
last-digit resolution or a count of significant digits. There are a couple of
(several!) different definitions, and I'm not sure which one is right. 3 ±
0.5 meter is comparable to 120 ± 10 inches, but interpreting "3 meter" as
having a default precision of ± 0.5 meter is problematic. It is easier to
see the problem if you compare with a prefix: what is the precision of 3000
meter vs 3 km? And when do you count significant digits? Is a zero (0) a
significant digit?</rant>
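The two competing readings can be made concrete in a short sketch (my own helper names, using Python's decimal module; this is not code from either library):

```python
from decimal import Decimal

def last_digit_resolution(value: str) -> Decimal:
    """Interpret precision as +/- half a unit in the last written digit:
    "3" -> 0.5 and "3.0" -> 0.05. Note that "3000" also gives 0.5, so
    "3 km" gets a default uncertainty of 0.5 km = 500 m while the same
    quantity written as "3000 m" gets 0.5 m -- the prefix problem."""
    exp = Decimal(value).as_tuple().exponent  # exponent of the last digit
    return Decimal(1).scaleb(exp) / 2

def significant_digits(value: str) -> int:
    """Count significant digits naively as all digits of the coefficient.
    This treats the trailing zeros in "3000" as significant, which is only
    one of the competing conventions."""
    return len(Decimal(value).as_tuple().digits)
```

Under the last-digit rule "3 km" and "3000 m" disagree by three orders of magnitude; under the significant-digit rule the answer depends on whether those zeros count, which is exactly the ambiguity above.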
Otherwise I find this extremely amusing. When I first mentioned this we got
into a fierce discussion, and the conclusion was that we should definitely
not use big numbers. Now we do. :D
On Mon, Oct 7, 2019 at 10:44 AM Daniel Kinzler <dkinzler(a)wikimedia.org> wrote:
> On 07.10.19 at 09:50, John Erling Blad wrote:
>> Found a few references to bcmath, but some weirdness made me wonder if
>> it was bcmath after all. I wonder if the weirdness is the juggling that
>> happens when bcmath is missing.
> I haven't looked at the code in five years or so, but when I wrote it, it
> was indeed bcmath with fallback to float. The limit of 127 characters sounds
> right, though I'm not sure without looking at the code.
> Quantity is based on Number, with quite a bit of added complexity for
> converting between units while considering the value's precision. E.g. "3
> meters" should not turn into "118,11 inch", but "118 inch" or even "120
> inch", if its precision is the default +/- 0.5 meter = 19,685 inch, which
> means the last digit is insignificant. Had lots of fun and confusion with
> that. I also implemented rounding on decimal strings for that. And initially
> screwed up some edge cases, which I only realized when helping my daughter
> with her homework ;)
> Principal Software Engineer, Core Platform
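The rounding-to-precision step Daniel describes can be sketched like this, assuming the rule "keep digits down to the leading digit of the uncertainty" (the function name is mine, not Wikibase's):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_to_uncertainty(value: Decimal, uncertainty: Decimal) -> Decimal:
    """Round `value` so that no digits below the magnitude of
    `uncertainty` survive. For 3 m +/- 0.5 m converted to inches,
    value = 118.11 and uncertainty = 19.685: the tens digit is the last
    significant one, so the result is 120 rather than 118.11."""
    # adjusted() gives the exponent of the most significant digit,
    # e.g. 19.685 -> 1, so we round to the nearest 10**1.
    exp = uncertainty.adjusted()
    return value.quantize(Decimal(1).scaleb(exp), rounding=ROUND_HALF_EVEN)
```

For example, `round_to_uncertainty(Decimal("118.11"), Decimal("19.685"))` compares equal to 120, matching the "120 inch" case from the mail above.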