In the DecimalMath class you escalate the scale based on the length of the fractional parts during multiplication. That is what they do in school, but it leads to false precision; one could argue it is both right and wrong. Is there a particular reason (use case) why you do that? I don't escalate the scale in the Lua lib.
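To make the point concrete, here is a minimal Python sketch (not your code, just an illustration of the two behaviors): the "schoolbook" multiply lets the scale of the product grow to the sum of the input scales, while a fixed-scale multiply rounds back to the scale of the inputs.

```python
from decimal import Decimal, ROUND_HALF_UP

def mul_escalating(a: str, b: str) -> Decimal:
    """Schoolbook multiply: the scale of the product is the sum of the
    input scales, e.g. 1.5 * 2.5 -> 3.75 (two decimals from one each)."""
    return Decimal(a) * Decimal(b)

def mul_fixed_scale(a: str, b: str) -> Decimal:
    """Multiply, then round back to the larger of the two input scales,
    so the result does not pretend to be more precise than the inputs."""
    da, db = Decimal(a), Decimal(b)
    scale = max(-da.as_tuple().exponent, -db.as_tuple().exponent, 0)
    quantum = Decimal(1).scaleb(-scale)  # e.g. Decimal('0.1') for scale 1
    return (da * db).quantize(quantum, rounding=ROUND_HALF_UP)

print(mul_escalating("1.5", "2.5"))   # 3.75 -- escalated scale
print(mul_fixed_scale("1.5", "2.5"))  # 3.8  -- scale kept at one decimal
```

The 3.75 in the first case is the "false precision" I mean: the inputs only had one decimal each.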
<rant>I see you have bumped into the problem of whether precision means last-digit resolution or a count of significant digits. There are a couple of (several!) different definitions, and I'm not sure which one is right. 3 ± 0.5 meters is comparable to 120 ± 10 inches, but interpreting "3 meter" as having a default precision of ±0.5 meter is problematic. The problem is easier to see if you compare with a prefix: what is the precision of 3000 meter vs 3 km? And when do you count significant digits? Is zero (0) a significant digit?</rant>
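Just to illustrate the "3000 meter vs 3 km" ambiguity under the last-digit-resolution reading (a toy sketch, nothing from the codebase): take the precision to be ± half the place value of the last written digit.

```python
from decimal import Decimal

def implied_uncertainty(text: str) -> Decimal:
    """Last-digit-resolution reading: precision is +/- half of the place
    value of the last digit written.  '3' -> 0.5, '3.000' -> 0.0005,
    '3e3' -> 500.  Note '3000' also gives 0.5 here, even though a reader
    counting significant digits might treat the trailing zeros as padding."""
    exp = Decimal(text).as_tuple().exponent
    return Decimal(5).scaleb(exp - 1)

for s in ("3", "3000", "3.000", "3e3"):
    print(s, "+/-", implied_uncertainty(s))
```

Under a significant-digits reading, "3000 meter" and "3 km" would instead get the same (one-digit) precision, which is exactly where the two definitions disagree.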
Otherwise I find this extremely amusing. When I first mentioned this we got into a fierce discussion, and the conclusion was that we should definitely not use big numbers. Now we do. :D