On Tue, Oct 27, 2009 at 9:47 PM, Ryan Lane
<rlane32(a)gmail.com> wrote:
> I think the hardest
> part is going to be keeping the tests up to date with the code.
That's pretty easy -- just have Code Review complain whenever anyone
causes a test failure, and force them to either fix the tests they
broke or have the change reverted. This assumes it's easy to look at
the changed output and update the test to accept it as correct,
though. At a glance, that seems like it *should* be the case if the
tests are well-written.
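For what it's worth, "well-written" here might look something like the
sketch below, using Python's unittest module. The render_heading
function and its expected markup are hypothetical stand-ins for the
code under test, not anything from the actual codebase; the point is
that a small, descriptively named test makes it obvious what to update
when output changes on purpose.

```python
import unittest

def render_heading(text):
    # Hypothetical stand-in for the code under test; a real suite
    # would import this from the application instead.
    return "<h2>%s</h2>" % text

class HeadingRenderingTest(unittest.TestCase):
    # One behaviour per test, named after that behaviour, so a
    # failure points straight at the expectation that needs updating.

    def test_wraps_text_in_h2(self):
        self.assertEqual(render_heading("History"), "<h2>History</h2>")

    def test_preserves_text_verbatim(self):
        self.assertIn("Early life", render_heading("Early life"))

if __name__ == "__main__":
    unittest.main()
```

If the rendered markup changes intentionally, fixing the suite is a
one-line edit to the expected string, which is the property that keeps
tests cheap to maintain.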
I'd encourage folks to start small with this.
One of the common failure modes for projects adopting automated
end-to-end tests (like Selenium) is that somebody divorced from the
development process writes a bunch of tests, often with
record-and-playback tools. As developers change things, they have a
hard time figuring out what the tests mean or how to update them; that
can either slow down commits or encourage people to disable failing
tests rather than update them.
I say this not to discourage the effort; I'm a huge fan of automated
tests. I just think that to maximize the chance of success, it's better
for the early focus to be on utility and sustainability rather than
maximum test coverage.
William