I have no qualms with any of the guidelines. They are good guidelines,
but like all guidelines they are made to be bent when appropriate, so
long as you leave a good explanatory comment. My main concern is that
the article is about how to write more unit-testable code, which is
something I think people take too far. The thing that unit tests are
good for is
testing that a "unit" of code does what you expect it to. The problem is
that people sometimes test portions of atomic units without testing the
whole unit. Java folks are especially dogmatic about testing just one
class at a time which is a great guideline but tends to be the wrong thing
to do about 20% of the time.
My favorite example of this is testing a Repository or a DAO with a mock
database. A repository's job is to issue the correct queries to the
database and spit the results back correctly. Without talking to an actual
database you aren't testing this. Without some good test data in that
database you aren't testing this. I'd go so far as to say you have to talk
to _exactly_ the right database (MySQL in our case) but other very smart
people disagree with me on that point.
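Concretely, a repository test that actually talks to a database looks roughly like this. (Sketched in Python with an in-memory SQLite database just to keep the example self-contained; by my own argument above you'd really want the production engine, MySQL, and the repository class and query here are invented for illustration.)

```python
import sqlite3


class UserRepository:
    """Thin data-access layer: its whole job is issuing correct SQL."""

    def __init__(self, conn):
        self.conn = conn

    def find_names_by_domain(self, domain):
        cur = self.conn.execute(
            "SELECT name FROM users WHERE email LIKE ? ORDER BY name",
            ("%@" + domain,),
        )
        return [row[0] for row in cur]


# Stand up a real (if in-memory) database and populate it with test data
# that mimics production in a useful sense -- then run the actual query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("alice", "alice@example.org"),
     ("bob", "bob@example.com"),
     ("carol", "carol@example.org")],
)

repo = UserRepository(conn)
assert repo.find_names_by_domain("example.org") == ["alice", "carol"]
```

A mocked connection could never tell you that the LIKE pattern or the ORDER BY clause is wrong; only real SQL execution does.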
While this example is especially silly, I'm sure we've all finished writing
a test, looked at the test code, and thought, "This test proves that I'm
interacting correctly with collaborator objects but doesn't prove that my
functionality is correct." Sometimes this is caused by collaborators being
non-obvious. Sometimes this is caused by global state that you have to
work around. In any case I'd argue that these tests should really be
deleted because all they really do is make your code coverage statistics
better, give you a false sense of security, and slow down your builds.
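Here's a deliberately silly sketch (Python, invented class names) of the kind of test I mean: it pins down the interaction with a collaborator while proving nothing about the actual functionality.

```python
from unittest.mock import MagicMock


class TaxCalculator:
    def total_with_tax(self, price):
        # Bug: should be price * 1.2, but the mock-based test below
        # never exercises this line at all.
        return price * 2.0


class Checkout:
    def __init__(self, calculator):
        self.calculator = calculator

    def charge(self, price):
        return self.calculator.total_with_tax(price)


# Mock-heavy "unit test": verifies only the interaction, not the math.
calc = MagicMock()
calc.total_with_tax.return_value = 12.0
checkout = Checkout(calc)
assert checkout.charge(10.0) == 12.0           # passes...
calc.total_with_tax.assert_called_once_with(10.0)

# ...while the real collaborator is simply wrong:
assert Checkout(TaxCalculator()).charge(10.0) == 20.0  # not 12.0!
```

The mocked test is green, the coverage counter goes up, and the bug ships anyway.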
So I just wrote a nice little wall of text about what is wrong with the
world, and like any good preacher I'll propose a few solutions:
1. Live with having bigger units. Call the tests integration tests if
it makes you feel better. I don't really care. But you have to stand up
the whole database connection, populate it with test data that mimics
production in a useful sense, and then run the query.
2. Build smaller components sensibly and carefully. The goal is to be
able to hold the whole component in your head at once, and for the
component to present such a clean API that when you mock it out the
tests are meaningful.
3. Write tests that exercise the entire application after it is started,
with tools like Selenium. The disadvantage here is that these run way
slower than unit tests and require that you learn yet another tool. Too
bad. Some stuff is simply untestable without a real browser, like Tim's
HTML forms.
4. Use lots of static analysis tools. They really do help identify dumb
mistakes and don't require you to do anything other than turn them on,
run them before you commit, and fail the build when they fail. Worth it.
5. Don't write automated tests at all, and do lots of code reviews and
manual testing. Sometimes this is really the most sensible thing. I'll
leave it to you to figure out when that is, though.
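To illustrate point 2, here's a small sketch (Python, with an invented Clock/RateLimiter pair) of a component whose API is small enough to hold in your head, so that a fake of it is trivially faithful and mocking it out stays meaningful:

```python
import time


class Clock:
    """Tiny component: one obvious method, nothing to get wrong in a fake."""

    def now_ms(self):
        return int(time.time() * 1000)


class RateLimiter:
    """Allows at most one call per interval_ms, using an injected clock."""

    def __init__(self, clock, interval_ms):
        self.clock = clock
        self.interval_ms = interval_ms
        self.last = None

    def allow(self):
        t = self.clock.now_ms()
        if self.last is not None and t - self.last < self.interval_ms:
            return False
        self.last = t
        return True


# Because Clock's API is a single method, this fake cannot drift from
# the real thing, and the test against it actually proves something.
class FakeClock:
    def __init__(self):
        self.t = 0

    def now_ms(self):
        return self.t


clock = FakeClock()
limiter = RateLimiter(clock, interval_ms=100)
assert limiter.allow() is True
assert limiter.allow() is False   # same instant: throttled
clock.t = 150
assert limiter.allow() is True    # interval elapsed
```

Contrast this with faking a whole database connection: the fake's fidelity is the entire question, and here it is a non-question.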
There is a great presentation on InfoQ about unit testing that I can't
find anymore, where the presenter likens testing to guard rails. His
point is that just because you have guard rails, you shouldn't stop
paying attention and expect them to save you.
Sorry for the rambling wall of text.
Nik
On Mon, Jun 3, 2013 at 7:58 AM, Daniel Kinzler <daniel(a)brightbyte.de> wrote:
Thanks for your thoughtful reply, Tim!
On 03.06.2013 07:35, Tim Starling wrote:
> On 31/05/13 20:15, Daniel Kinzler wrote:
>> "Writing Testable Code" by Miško Hevery
>> <http://googletesting.blogspot.de/2008/08/by-miko-hevery-so-you-decided-to.h…>
>>
>> It's just 10 short and easy points, not some rambling discussion of
>> code philosophy.
> I'm not convinced that unit testing is worth doing down to the level
> of detail implied by that blog post. Unit testing is essential for
> certain kinds of problems -- especially complex problems where the
> solution and verification can come from two different (complementary)
> directions.
I think testability is important, but I think it's not the only (or even
main) reason to support the principles from that post. I think these
principles are also important for maintainability and extensibility.
Essentially, they enforce modularization of code in a way that makes all
parts as independent of each other as possible. This means they can also
be understood by themselves, and can easily be replaced.
> But if you split up your classes to the point of triviality, and then
> write unit tests for a couple of lines of code at a time with an
> absolute minimum of integration, then the tests become simply a mirror
> of the code. The application logic, where flaws occur, is at a higher
> level of abstraction than the unit tests.
That's why we should have unit tests *and* integration tests.

I agree though that it's not necessary or helpful to enforce the maximum
possible breakdown of the code. However, I feel that the current code is
way too close to the monolithic end of the spectrum - we could and
should do a lot better.
> So my question is not "how do we write code that is maximally
> testable", it is: does convenient testing provide sufficient benefits
> to outweigh the detrimental effect of making everything else
> inconvenient?
If there are indeed such detrimental effects. I see two main
inconveniences:

* More classes/files. This is, in my opinion, mostly a question of
using the proper tools.

* Working with "passive" objects, e.g. $chargeProcessor->process( $card )
instead of $card->charge(). This means additional code for injecting the
processor, and more code for calling the logic.

That is inconvenient, but not detrimental, IMHO: it makes
responsibilities clearer and allows for easy substitution of logic.
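A sketch of the "passive object" style (in Python rather than PHP, with invented names, so take it as illustration only):

```python
from dataclasses import dataclass


@dataclass
class Card:
    """Passive value object: just data, no wiring to payment services."""
    number: str
    balance: float


class ChargeProcessor:
    """Active service: the charging logic lives here and is injected
    wherever it is needed, instead of hiding inside Card."""

    def process(self, card, amount):
        if amount > card.balance:
            raise ValueError("insufficient funds")
        card.balance -= amount
        return card.balance


# Caller side: slightly more code than card.charge(amount), but the
# processor can be substituted (a fake, a logging wrapper, ...) at will.
processor = ChargeProcessor()
card = Card(number="4111", balance=50.0)
assert processor.process(card, 20.0) == 30.0
```

The extra injection line is the inconvenience; the clear ownership of the charging responsibility is what it buys.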
> As for the rest of the blog post: I agree with items 3-8.
yay :)
> I would agree with item 1 with the caveat that value objects can be
> constructed directly, which seems to be implied by item 9 anyway.
Yes, absolutely: value objects can be constructed directly. I'd even go
so far as to say that it's ok, at least at first, to construct
controller objects directly, using services injected into the local
scope (though it would be better to have a factory for the controllers).
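A minimal sketch of why direct construction of value objects is harmless (Python, with an invented Money class): there is no hidden wiring for a factory to manage.

```python
from dataclasses import dataclass


# A value object can safely be constructed anywhere: it holds data and
# has no dependencies on services, so direct construction hides nothing.
@dataclass(frozen=True)
class Money:
    amount: int       # minor units, e.g. cents
    currency: str


price = Money(amount=499, currency="EUR")
same = Money(amount=499, currency="EUR")
assert price == same            # value semantics: equality by content
```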
> The rest of item 9, and item 2, are the topics which I have been
> discussing here and on the wiki.
To me, 9 is pretty essential: without that principle, value objects will
soon cease to be thus, and will again grow into the monsters we see in
the code base now.

Item 2 is less essential, though still important, I think; basically, it
requires every component (class) to make explicit which other components
it relies on for collaboration. Only then can it easily be isolated and
"transplanted" - that is, re-used in a different context (like testing).
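A sketch of what making collaborators explicit buys you (Python, invented names): the dependency appears in the constructor signature instead of being reached through globals, so the component can be transplanted into a test context.

```python
class Mailer:
    """Real collaborator, used in production."""

    def send(self, to, body):
        print(f"mail to {to}: {body}")


class Notifier:
    def __init__(self, mailer):
        self.mailer = mailer       # dependency is visible in the signature

    def notify(self, user):
        self.mailer.send(user, "your build finished")


# In a different context (a test), swap in a recording fake:
class RecordingMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))


fake = RecordingMailer()
Notifier(fake).notify("alice")
assert fake.sent == [("alice", "your build finished")]
```

Had Notifier fetched a mailer from some global registry, the test would first have to discover that fact, then work around it.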
> Regarding item 10: certainly separation of concerns is a fundamental
> principle, but there are degrees of separation, and I don't think I
> would go quite as far as requiring every method in a class to use
> every field that the class defines.
Yes, I agree. Separation of concerns can be driven to the atomic level,
and at some point becomes more of a pain than an aid. But we definitely
should split more than we do now.
-- daniel
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l