On Mon, Jun 3, 2013 at 10:20 AM, Jeroen De Dauw <jeroendedauw(a)gmail.com> wrote:
>> 4. Don't write automated tests at all and do lots of code reviews and
>> manual testing. Sometimes this is really the most sensible thing. I'll
>> leave it to you to figure out when that is though.
> Absolutist statements are typically wrong. There are almost always cases
> in which some practice is not applicable. However, I strongly disagree
> with your recommendation of not writing tests and automating them. I
> disagree even more strongly with the notion that manual testing is
> generally something you want to do. I've seen many experts in the field
> of software design recommend strongly against manual testing, and I'm
> seeing the same theme being pretty prevalent here at the International
> PHP Conference I'm attending.
I think not having automated tests is right in some situations, but I
certainly wouldn't recommend it. Manual testing sucks, and having nice
tests with Selenium or some such tool is way better in most situations, but
there are totally times where a good code review and manual verification
are perfect. I'm thinking of temporary solutions, or styling issues that
are difficult to verify with automated tests. I'm certainly no expert and
I'd _love_ to learn more about things that help in the situations where I
feel like manual testing is best. I'd love nothing more than to be wrong.
>> So my question is not "how do we write code that is maximally
>> testable", it is: does convenient testing provide sufficient benefits
>> to outweigh the detrimental effect of making everything else
>> inconvenient?
> This contains the suggestion that testable code is inherently badly
> designed. That is certainly not the case. Good design and testability go
> hand in hand. One of the selling points of testing is that it strongly
> encourages you to create well designed software.
IMHO you can design code so that it is both easy to understand and easy to
test, but there is a real temptation to sacrifice comprehensibility for
testability. Mostly I see this in components being split into
incomprehensibly small chunks and then tested via an intricate mock waltz.
I'm not saying this happens all the time, only that it happens and we
need to be vigilant. The guidelines in the article help prevent such
craziness.
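To make the "mock waltz" concrete, here's a little sketch in Python (the `Cart` class and both tests are invented for illustration). The over-mocked test mirrors the implementation's wiring; the direct test states the behavior:

```python
import unittest
from unittest import mock


class Cart:
    """Invented example class: totals line-item prices held in cents."""

    def __init__(self, items):
        self.items = items  # list of (name, price_in_cents) pairs

    def total(self):
        return sum(price for _, price in self.items)


class OverMockedTest(unittest.TestCase):
    # The "mock waltz": the collaborator is faked, so the test ends up
    # restating the plumbing instead of documenting the behavior.
    def test_total(self):
        cart = mock.Mock(spec=Cart)
        cart.total.return_value = 500
        self.assertEqual(cart.total(), 500)  # only proves the mock answers


class StraightforwardTest(unittest.TestCase):
    # Testing the real object at a natural seam reads like documentation.
    def test_total_sums_item_prices(self):
        cart = Cart([("tea", 250), ("scone", 250)])
        self.assertEqual(cart.total(), 500)
```

The over-mocked version passes even if `total()` is broken, which is exactly the false sense of accomplishment I mean below.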
> There are other advantages to writing tests as well. Just off the top of
> my head:
> * Regression detection
> * Replaces manual testing with automated testing, which saves lots of
> time, esp in projects with multiple devs. Manual testing tends to be
> incomplete and skipped as well, so the number of bugs caught is much
> lower. And it does not scale. At all.
> * Documentation so formal it can be executed and is never out of date
> * Perhaps the most important: removes the fear of change. One can
> refactor code to clean up some mess without having to fear one broke
> existing behavior. Tests are a great counter to code rot. Without tests,
> your code quality is likely to decline.
This is perfect! If you think of your tests as formal verification
documents, then you are in good shape, because this implies that the tests
are readable.
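By "reads like documentation" I mean something like this hypothetical Python sketch (the `slugify` function is made up): the test names state the rules, and together they form an executable spec:

```python
import unittest


def slugify(title):
    """Invented example function: turn a page title into a URL slug."""
    return "-".join(title.lower().split())


class SlugifyDocumentation(unittest.TestCase):
    """Each test name states one rule of the behavior."""

    def test_words_are_joined_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_case_is_normalized_to_lowercase(self):
        self.assertEqual(slugify("ReadMe"), "readme")
```

A reader who scans just the test names learns what the function promises, and unlike a wiki page this documentation fails loudly when it goes stale.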
If I had my druthers I'd like all software to be designed in such a way
that it can be tested automatically with informative tests that read like
documentation. We'd all like that. To me it looks like there are three
problems:
1. How do you keep out tests that are incomprehensible as documentation?
2. What do you do with components for which no unit test can be written
that could serve as documentation?
3. What do you do when the formal documentation will become out of date so
fast that it feels like a waste of time to write it?
I really only have a good answer for #2, and that is to test components
together, like the DB and Repository, or the server-side application and
the browser.
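A rough sketch of what I mean by testing the DB and Repository together (Python with an in-memory SQLite database; the `UserRepository` class is invented for the example):

```python
import sqlite3
import unittest


class UserRepository:
    """Invented example: a persistence component with no meaningful
    unit test of its own, since its whole job is talking to the DB."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

    def add(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None


class UserRepositoryIntegrationTest(unittest.TestCase):
    # The DB and the repository are exercised as a pair: the test
    # documents their combined behavior, not either piece in isolation.
    def setUp(self):
        self.repo = UserRepository(sqlite3.connect(":memory:"))

    def test_added_user_can_be_found(self):
        user_id = self.repo.add("nik")
        self.assertEqual(self.repo.find(user_id), "nik")

    def test_missing_user_returns_none(self):
        self.assertIsNone(self.repo.find(12345))
```

Mocking the connection here would just be the mock waltz again; against a real (if in-memory) database, the test still reads like documentation.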
#1 troubles me quite a bit because I've found those tests to be genuinely
hurtful in that they give you the sense that you are accomplishing
something when you aren't.
Nik