On Wed, Apr 30, 2008 at 12:20 AM, Philip Sandifer snowspinner@gmail.com wrote:
I think the response is better summarized as "it is impossible to reduce the error rate to 0 with a population of this size." Accordingly, anecdotal evidence does not seem to me to be significant,
[snip]
There are two classes of errors that can show up in a production process: random and systemic. It is important to understand how each class contributes to the overall error rate, because the methods used to address them differ.
Systemic errors can be eliminated. It doesn't matter whether you are making 10 or 10 million things: the systemic part is equally eliminable, and the more things you make, the greater the payoff from eliminating the systemic errors.
The random contribution cannot be eliminated (except insofar as whatever hasn't yet been done to reduce random errors can itself be regarded as a systemic error).
I realize the above definition is somewhat circular, but the important distinction between systemic and random errors has nothing to do with their frequency; it has everything to do with whether they can be controlled.
Once the systemic errors are gone, there are only two efficient and effective ways of handling the remaining random error. You can ignore it and suffer the consequences, or you can decide that it is unacceptable and test every single thing produced for it, fixing it where you find it. Sampled testing is never an effective solution to random errors because, if there is no systemic error, your sample tells you absolutely nothing about the rest of the set.
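(To make that sampling point concrete, here is a minimal sketch with made-up numbers, not anything from the discussion itself: when errors strike items independently at random, inspecting a sample estimates the overall rate, but it says nothing about which of the uninspected items are bad, so only checking everything actually removes them.)

```python
import random

random.seed(1)
N = 10_000
TRUE_RATE = 0.02  # assumed random error rate, purely illustrative

# Each item fails independently at random -- no systemic component.
items_bad = [random.random() < TRUE_RATE for _ in range(N)]

# Sampled testing: inspect 500 items and estimate the rate from them.
sample_idx = set(random.sample(range(N), 500))
est_rate = sum(items_bad[i] for i in sample_idx) / 500
untouched_bad = sum(items_bad[i] for i in range(N) if i not in sample_idx)
print(f"estimated error rate from the sample: {est_rate:.3f}")
print(f"bad items still sitting in the uninspected rest: {untouched_bad}")

# Exhaustive testing: every item is checked and any error found is fixed.
remaining_after_full_check = 0
print(f"bad items remaining after testing everything: {remaining_after_full_check}")
```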
English Wikipedia is rife with errors of the systemic class, things we could control or reduce, and I think you will agree with this even if you and I do not agree on exactly what those errors are or how to resolve them.
These ought to be fixed, and the fact that fixing the things we can control doesn't solve the things we can't is not a valid excuse not to try.