On 25/10/13 03:32, Daniel Friesen wrote:
On 2013-10-24 9:19 AM, Brad Jorsch (Anomie) wrote:
The claimed problem behind a lot of this is "too many dependencies" making things hard to test and the idea that you can somehow make this go away by dividing everything into tinier and tinier pieces. To some extent this works, but at the cost of making the system as a whole harder to understand because you have to track all the little pieces. I doubt MediaWiki has reached the point of diminishing returns on that, but I'm not really sure that the end-goal envisioned here is the *right* division.
Then there's the potential that individually testing a bunch of components doesn't always match what happens in the actual code that puts everything together. So in the end you either have to test the thing as a whole anyway, or you end up with unit tests that are even less useful than the ones you started with, because they won't catch regressions the original tests would have caught.
Intuitively, it seems likely that if we reduce the typical scope and integration level of automated testing, we will end up missing bugs, because bugs which were previously testable due to being within a single unit will now be spread over multiple units. Presumably this effect is offset by an increase in test coverage.
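As a hedged illustration of the kind of bug meant here (all names and values below are invented for the example, not from MediaWiki): two functions can each satisfy their own contract and pass their own unit tests, while the composed code is wrong because the contracts disagree, in this sketch about seconds versus milliseconds.

```python
from datetime import datetime, timezone

def parse_timestamp(s: str) -> int:
    """Parse a YYYY-MM-DD date into a Unix timestamp in SECONDS."""
    return int(datetime.strptime(s, "%Y-%m-%d")
               .replace(tzinfo=timezone.utc).timestamp())

def is_recent(ts_ms: int, now_ms: int) -> bool:
    """Expects MILLISECONDS: true if ts_ms is within the last 24 hours."""
    return now_ms - ts_ms < 24 * 60 * 60 * 1000

# Each unit test passes in isolation, against its own contract:
assert parse_timestamp("1970-01-02") == 86400          # one day, in seconds
assert is_recent(90_000_000, 100_000_000)              # 10,000 s ago, in ms

# The integrated path is wrong: seconds are fed where milliseconds are
# expected, so a same-day event is reported as not recent. Only a test
# that exercises the composed path catches this.
now_ms = 1_700_000_000_000                             # 2023-11-14 ~22:13 UTC, in ms
assert is_recent(parse_timestamp("2023-11-14"), now_ms) is False  # wrong answer
```

Splitting a once-monolithic code path across these two units moves the bug out of reach of either unit's tests, which is the effect described above; whether broader coverage compensates is exactly the empirical question.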
However, I would prefer not to make such decisions by intuition alone. It would be nice to see some solid statistical data on this.
-- Tim Starling