On Tue, Jun 4, 2013 at 12:36 PM, Jeroen De Dauw <jeroendedauw@gmail.com> wrote:
> Hey,
>
>> My own experience is that "test coverage" is a poor evaluation metric
>> for anything but "test coverage"; it doesn't produce better code, and
>> tends to produce code that is considerably harder to understand
>> conceptually, because it has been over-factored into simple bits that
>> hide the actual code and data flow. "Forest for the trees".
>
> Test coverage is a metric that shows how much of your code is executed
> by your tests. From this alone you cannot say whether code is good or
> bad. You can have bad code with 100% coverage, and good code without
> any coverage. You first state that it is a poor metric for measuring
> quality, and then proceed to claim that more coverage implies bad code.
> Aside from contradicting yourself, this is pure nonsense. Perhaps you
> just expressed yourself badly, since test coverage does not "produce"
> code to begin with.
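For anyone unfamiliar with the mechanics: coverage is usually measured by
running the suite under a tool such as coverage.py. A minimal sketch,
assuming a pytest-based suite:

    coverage run -m pytest
    coverage report -m

The report shows, per file, what percentage of lines the tests executed
and which lines were missed. Nothing in that number says whether the
tests actually assert anything.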
The thing is, quite a few of us have seen cases where people bend over backwards for test coverage, sacrificing code quality and writing tests that don't provide any real value. In this respect, chasing high test coverage can poison your code. It shouldn't, but it can.
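To make that concrete, here is the kind of test that pads the coverage
number without checking anything (a contrived sketch, all names made up):

    def normalize_price(cents):
        """Convert an integer price in cents to a dollar string."""
        if cents < 0:
            raise ValueError("price cannot be negative")
        return "$%d.%02d" % (cents // 100, cents % 100)

    def test_normalize_price():
        # Executes every line, so coverage reports 100% for the
        # function, yet nothing is asserted: the formatting could be
        # completely wrong and this test would still pass.
        normalize_price(1234)
        try:
            normalize_price(-1)
        except ValueError:
            pass

A coverage report can't tell this apart from a real test.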
The problem is rejecting changes like this while still encouraging people to write the useful kinds of tests: tests for usefully large chunks of behavior that serve as formal documentation. Frankly, one of my favorite tools in the world is Python's doctest module, because the test _is_ the documentation.
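For instance (a minimal sketch with a made-up function; run it with
python -m doctest yourfile.py):

    def wiki_link(title, label=None):
        """Render a MediaWiki-style internal link.

        >>> wiki_link("Main Page")
        '[[Main Page]]'
        >>> wiki_link("Main Page", label="home")
        '[[Main Page|home]]'
        """
        if label is None:
            return "[[%s]]" % title
        return "[[%s|%s]]" % (title, label)

The examples in the docstring are executed verbatim as tests, so if the
behavior changes the documentation fails with it; the two can't drift
apart.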
Nik