[QA] Forget developers in test, we need testers in development

Nikolas Everett neverett at wikimedia.org
Thu Jan 2 16:37:43 UTC 2014


On Thu, Jan 2, 2014 at 10:50 AM, Željko Filipin <zfilipin at wikimedia.org> wrote:

> Hi Nik,
>
> apologies for the late reply, I was on vacation. Comments are inline.
>
>
> On Sat, Dec 21, 2013 at 8:18 PM, Nikolas Everett <neverett at wikimedia.org> wrote:
>
>> 1.  What do you do with developers that think it is beneath them to write
>> integration tests?  Unit tests?
>>
>
> I would also ask why a developer would think that. All code is created
> equal, right? Writing test code should be no different than writing
> production code. Or am I wrong?
>

I've worked with plenty of developers who won't write browser tests.
They've been sold on unit testing but won't pick up Selenium because it is
either beneath them, or too complicated, or they just don't feel they have
time to learn another skill.  Or some combination of all of that.


>
>
>> 2.  Where should you store those tests?  (We've mostly settled on next to
>> the production code which seems like the right answer to me for lots of
>> reasons.)
>>
>
> +1
>
>
>>  2.  What code/features/changes should have unit tests?  Integration
>> tests?  What is the difference, anyway?
>>
>
> Looks like you have two questions with number 2. :)
>

Numbers are hard.


>
> As a rule of thumb, code that is more likely to break, that is hard to
> understand, or that was written recently should have more tests than other
> code.
>
> I would say that most (if not all) code should have unit tests, and things
> that are hard or impossible to test with a unit test should be tested at a
> higher level.
>
> The difference?
>
> "A test is not a unit test if:
>
> - It talks to the database
> - It communicates across the network
> - It touches the file system
> - It can't run at the same time as any of your other unit tests
> - You have to do special things to your environment (such as editing
> config files) to run it."[1]
>
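By that definition, a unit test in miniature looks something like this (a
Python sketch; `normalize_title` is an invented helper, not real MediaWiki
code):

```python
def normalize_title(title):
    """Collapse whitespace and capitalize the first letter.
    Hypothetical helper, for illustration only."""
    cleaned = " ".join(title.split())
    return cleaned[:1].upper() + cleaned[1:]

def test_normalize_title():
    # No database, network, file system, shared state, or special
    # environment: pure in-memory assertions that run in any order.
    assert normalize_title("  main   page ") == "Main page"
    assert normalize_title("") == ""
```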

I'd argue that defining what your unit is in unit testing is super
important and very easy to get wrong.  If you do it wrong you write unit
tests that don't prove anything.  I've seen developers grow to hate unit
testing because they realize that they aren't proving anything but refuse
to grow their units.  I have a single rule of thumb which I've decided to
call the Magic Rule:
If code relies on complicated invocations of an external system to do a
major portion of its work then the external system must be part of the
unit.  You can't prove Harry Potter's magic works by shouting spells and
swishing a stick around.

Or thought of in a different way:
The value of a test is inversely proportional to the number of mocks it
interacts with.
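A toy illustration of that inverse proportionality (a Python sketch;
`page_exists` and the `db.query` API are invented for the example):

```python
from unittest import mock

def page_exists(db, title):
    """Hypothetical code under test: asks the database for a page row."""
    row = db.query("SELECT page_id FROM page WHERE page_title = ?", (title,))
    return row is not None

def test_page_exists_with_a_mock():
    # The mock returns whatever we tell it to, so this test "passes"
    # even if the SQL is wrong, the table is misspelled, or the schema
    # changed yesterday.  It mostly proves that the mock works.
    db = mock.Mock()
    db.query.return_value = {"page_id": 1}
    assert page_exists(db, "Main Page")
    db.query.assert_called_once()
```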

You can replace "complicated invocations of an external system" with stuff
like:
If code relies on SQL more complicated than "SELECT * FROM foo WHERE foo_id
= ?" to do a major portion of its work then the database must be part of
the unit.  That includes the target version of the database, the production
schema, and the best (as ugly as production) test data you can get.
If code relies on the vagaries of POSIX file operations then the file
system must be part of the unit.

When you follow this rule you end up with more integration tests than
normal.  Sometimes you don't end up with any unit tests at all.  To me,
that is OK.
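Here is what pulling the database into the unit can look like in miniature
(a hedged Python sketch: the schema and query are invented, and in-memory
SQLite stands in for the real engine, where the rule above says you should
really use the target version of your production database):

```python
import sqlite3

def find_orphan_pages(conn):
    """Hypothetical query under test: pages nothing links to.
    Complicated enough that the database belongs in the unit."""
    return [row[0] for row in conn.execute(
        "SELECT p.title FROM page p "
        "LEFT JOIN pagelink l ON l.target = p.title "
        "WHERE l.target IS NULL ORDER BY p.title")]

def test_find_orphan_pages():
    # The SQL actually runs against a real engine, so a typo in the
    # JOIN or a schema mismatch fails the test instead of hiding.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE page (title TEXT)")
    conn.execute("CREATE TABLE pagelink (target TEXT)")
    conn.executemany("INSERT INTO page VALUES (?)",
                     [("Main Page",), ("Orphan",), ("Linked",)])
    conn.executemany("INSERT INTO pagelink VALUES (?)",
                     [("Main Page",), ("Linked",)])
    assert find_orphan_pages(conn) == ["Orphan"]
```

In real life you would also load the production schema and the ugliest test
data you can get, per the rule above.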


>
>> 3.  How do you make sure you stick to #3?
>>
>
> Well, this is #3, but I guess you were asking about the previous
> question. I am not sure what to do.
>

Yeah, that was about the previous question.  Lots of corporate shops
measure unit test coverage.  Some measure it and stop there.  Some fail
builds if the coverage isn't above a certain level.  Lots of approaches.
I'm of the opinion that if you measure unit test coverage you ought to
measure integration test coverage too.  While those figures are neat on
their own, the union of the two is actually more important.
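As a toy illustration of why the union matters (plain Python; the line
sets are invented numbers for a single ten-line file):

```python
# Which lines of a ten-line file each test run executed.
unit_covered = {1, 2, 3, 4, 6}          # from the unit test run
integration_covered = {1, 2, 5, 7, 8}   # from the integration test run
total_lines = 10

union = unit_covered | integration_covered

# Each suite alone covers 50%, but together they cover 80%, not 100%:
# the overlap (lines 1 and 2) is counted once, and lines 9 and 10 are
# still untested by anything.
print(f"unit:        {len(unit_covered) / total_lines:.0%}")
print(f"integration: {len(integration_covered) / total_lines:.0%}")
print(f"union:       {len(union) / total_lines:.0%}")
```

In practice a tool like coverage.py can merge the data files from separate
runs for you, rather than computing the union by hand like this.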


>
>
>> 4.  Where does documentation live?  Do you build documentation from
>> tests?  Do you build tests from documentation?  (Which one of these do we
>> do or is it even listed?)
>>
>
> I have been struggling with this myself. Feature files are one form of
> documentation.
>

Elasticsearch doesn't use its tests as documentation at all.  They instead
force developers who submit pull requests to write the feature, the tests,
and the documentation all in the same pull request.  I like this method
because it keeps you honest.  If the tests can build some or all of the
documentation then that is better.


>
>
>>  5.  If integration testing is really just another speciality like
>> databases what does this mean for people that have spent years doing one or
>> the other?
>>
>
>  I am not sure what you ask here.
>

If the brave new world of everyone just being an engineer comes about, what
happens to the folks with 20 years of testing experience and none writing
production code?  They transition to being experts in testing who write
some production code, but how?  What does that mean from a culture
standpoint?


>
>
>> 6.  What do you do when your organization doesn't jibe with this reality?
>> (WMF has a QA team when it might make more sense to matrix integration test
>> specialists into teams to teach them how to write integration tests, for
>> example.)
>>
>
> We are moving towards integrating the QA team into other teams. The
> problem is that there are just a few of us and a lot of teams.
>

Platform doesn't make that easy either because we aren't really a single
team.  I mean, we have a single manager and meet once a week, but we
actually break into smaller teams to do projects.  Some, like search, last
a long time, but some are only a few weeks or days.

I wonder, also, who becomes a member of the QA team if everyone is an
engineer?  I don't think just writing integration tests or browser tests is
good enough because we really want almost everyone doing that.


>
>
>>  7.  Holy cow, how does this all line up with volunteers who only want to
>> do one thing or the other, when we really should take any help we can get?
>>
>
> I do not think it will be trivial, but I am pretty sure we can handle
> volunteers that just want to test or just want to code. :)
>
>
>> 8.  Who pays attention to build failures and what do they do about them?
>>
>
> A lot of people, actually. The mobile team, the language team, the QA
> team... When a build fails, we fix it.
>
>
>>  9.  What system, exactly, are those build failures testing?
>>
>
> A build can fail for several reasons. We had an entire workshop about
> it[2].
>

>
>> Now that I think about it, who is coming to the architecture summit?
>> These kinds of questions would be pretty interesting to talk about and
>> might deserve a (late) RFC.
>>
>
> I am not sure if anybody from QA team is coming.
>

That is a shame because these questions are interesting.  Not because we
don't have answers to them, but because those answers matter more than the
rest of WMF really pays attention to.  At least that is what having no one
from QA at the architecture summit makes me think.

Nik

