On 01/14/2015 06:57 PM, James Douglas wrote:
> Howdy all,
>
> Recently we've been playing with tracking our code coverage in
> Services projects, and so far it's been pretty interesting.
Based on your coverage work for RESTBase, we added code coverage using
the same nodejs tools (istanbul) and service (coveralls.io) for Parsoid
as well (https://github.com/wikimedia/parsoid; latest build:
https://coveralls.io/builds/1744803).
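If anyone else wants to wire this up, the core of it is just two
commands (a sketch only; the test path is made up, and this assumes
istanbul, mocha, and coveralls are installed as devDependencies, with
the repo token set up or a supported CI environment):

    # Run the mocha suite under istanbul to produce coverage/lcov.info.
    ./node_modules/.bin/istanbul cover _mocha -- --recursive tests/
    # Pipe the LCOV report to the coveralls.io uploader.
    cat ./coverage/lcov.info | ./node_modules/.bin/coveralls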
So far, we've learnt that our coverage (via parser tests, plus mocha for
other bits) is pretty decent, and that most of our uncovered areas are
in code that isn't exercised in testing (ex: tracing, debugging,
logging) or isn't tested sufficiently because the corresponding feature
isn't enabled in production yet.
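(As an aside: for debug-only branches like those, istanbul supports
ignore hints so that code known to be unreachable in tests doesn't drag
the numbers down. A tiny sketch, with a made-up trace flag:

    /* istanbul ignore next */
    if (process.env.TRACE) {
        // Debug-only path, excluded from coverage accounting.
        console.error('trace: entering tokenizer');
    }

Whether we want to exclude such code or eventually test it is a
separate question, of course.)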
But, I've also seen that there are some edge cases and failure scenarios
that aren't covered by our existing parser tests. The edge case gaps are
scenarios that we saw in production but for which (at the time we fixed
those issues in code) we didn't add a sufficiently reduced parser test.
As for the failure scenarios, we might need mocha tests to simulate
them (ex: cache failures for selective serialization, timeouts, etc.).
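Something along these lines, roughly (a sketch only;
serializeSelectively and the cache interface here are hypothetical
stand-ins, not Parsoid's actual API):

    var assert = require('assert');

    // Hypothetical stand-in for the real serializer: tries the cache,
    // and falls back to a full serialization on any cache error.
    function serializeSelectively(cache, html, cb) {
      cache.get(html, function(err, cached) {
        if (err || !cached) {
          return cb(null, { mode: 'full', html: html });
        }
        cb(null, { mode: 'selective', html: cached });
      });
    }

    describe('selective serialization', function() {
      it('falls back to full serialization on cache failure',
          function(done) {
        var failingCache = {
          // Simulate the cache being unavailable.
          get: function(key, cb) { cb(new Error('cache unavailable')); }
        };
        serializeSelectively(failingCache, '<p>Hi</p>',
            function(err, res) {
          assert.ifError(err);
          assert.equal(res.mode, 'full');
          done();
        });
      });
    });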
Some of the edge case scenarios and more aggressive testing are taken
care of by our nightly round-trip testing on 160K articles.
Still, adding coverage tracking has definitely revealed gaps in our test
coverage that we should / will address in the coming weeks. At the same
time, it has confirmed my / our intuition that we have pretty high
coverage via the parser tests that we constantly update and add to.
Subbu.