Howdy all,
Recently we've been playing with tracking our code coverage in Services projects, and so far it's been pretty interesting.
We've learned about where the gaps are in our testing (which has even revealed holes in our understanding of our own specifications and use cases), and had fun watching the coverage climb with (nearly) each pull request.
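To illustrate the kind of gap a coverage report surfaces, here's a toy example (hypothetical code, not from our repos): istanbul would flag the two clamping branches below as uncovered until tests exercise them.

    var assert = require('assert');

    function clamp(x, lo, hi) {
      if (x < lo) { return lo; }
      if (x > hi) { return hi; }
      return x;
    }

    describe('clamp', function () {
      it('passes values inside the range through', function () {
        assert.strictEqual(clamp(5, 0, 10), 5);
      });
      // Without tests for x < lo and x > hi, a coverage run (e.g.
      // `istanbul cover _mocha`) reports those branches as unexercised.
    });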
I've slapped together some notes about our experience here:
https://github.com/wikimedia/restbase/tree/master/doc/coverage#code-coverage
I'd love to hear your thoughts and learn about your related experiences. What are your favorite code coverage tools and services?
Cheers! James
On 01/14/2015 06:57 PM, James Douglas wrote:
Howdy all,
Recently we've been playing with tracking our code coverage in Services projects, and so far it's been pretty interesting.
Based on your coverage work for RESTBase, we added code coverage to Parsoid as well, using the same Node.js tool (istanbul) and service (coveralls.io) (https://github.com/wikimedia/parsoid; latest build: https://coveralls.io/builds/1744803).
So far, we've learned that our coverage (via parser tests, plus mocha for other bits) is pretty decent, and that a lot of our uncovered areas are in code that isn't yet exercised in testing (e.g. tracing, debugging, logging) or isn't tested sufficiently because the feature isn't enabled in production yet.
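(As an aside, istanbul's ignore pragmas can exclude such deliberately untested paths from the report. The pragma below is real istanbul syntax; the tracing function is just a hypothetical illustration.)

    /* istanbul ignore next */
    function traceEvent(msg) {
      // Only active when tracing is explicitly enabled, so this branch
      // never runs under CI and would otherwise show up as uncovered.
      if (process.env.TRACE) { console.error('[trace]', msg); }
    }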
But I've also seen that there are some edge cases and failure scenarios that aren't tested via our existing parser tests. The edge cases are scenarios we saw in production but for which, at the time we fixed those issues in code, we didn't add a sufficiently reduced parser test. As for the failure scenarios, we might need mocha tests to simulate them (e.g. cache failures for selective serialization, or timeouts); see the sketch below.
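Such a failure-injection test might look roughly like this (a minimal sketch with a toy serializer standing in for the real thing; not Parsoid's actual API):

    var assert = require('assert');

    // Toy serializer: uses the cached original source when available,
    // otherwise falls back to serializing from scratch.
    function serialize(cache, html, cb) {
      cache.get(html, function (err, original) {
        if (err || !original) {
          return cb(null, { mode: 'full' });  // fallback path
        }
        cb(null, { mode: 'selective', base: original });
      });
    }

    describe('selective serialization', function () {
      it('falls back to full serialization when the cache fails', function (done) {
        var failingCache = {
          get: function (key, cb) { cb(new Error('cache unavailable')); }
        };
        serialize(failingCache, '<p>Hi</p>', function (err, result) {
          assert.ifError(err);
          assert.strictEqual(result.mode, 'full');
          done();
        });
      });
    });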
Some of the edge case scenarios, and more aggressive testing in general, are taken care of by our nightly round-trip testing on 160K articles.
Adding this has definitely revealed gaps in our test coverage that we will address in the coming weeks, but at the same time it has verified our intuition that we get pretty high coverage from the parser tests we constantly update and add to.
Subbu.
I'd love to use coveralls for the iOS app! I've thought it (and Travis) looked promising before, but they seem especially relevant for MediaWiki projects, which are all OSS.
One other JS testing lib you guys should check out is JSVerify (http://jsverify.github.io/), a port of Haskell's QuickCheck. It lets you do property-based testing, which is great for rethinking your designs and program requirements, as well as for hitting edge cases that aren't feasible to think of ahead of time.
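A property test in JSVerify looks roughly like this (a minimal sketch, assuming jsverify is installed; the property itself is just a toy):

    var jsc = require('jsverify');

    // Property: string concatenation lengths add up, for *any* two strings.
    var prop = jsc.forall(jsc.string, jsc.string, function (a, b) {
      return (a + b).length === a.length + b.length;
    });

    // Runs the property against ~100 random inputs and throws with a
    // shrunk counterexample if it ever fails.
    jsc.assert(prop);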
Happy to discuss more if anyone's interested, or you can watch these two interesting talks about test.check (https://github.com/clojure/test.check), a Clojure property-based testing library: https://www.youtube.com/watch?v=JMhNINPo__g and https://www.youtube.com/watch?v=HXGpBrmR70U
- Brian
+1 for property-based testing. JSVerify's Haskell-like syntax makes it super easy to conjure up arbitrary generators.
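For example, a custom record generator composed from the built-in arbitraries might look like this (hypothetical shape, minimal sketch):

    var jsc = require('jsverify');

    // Arbitrary "user" records: non-empty names, ages from 0 to 120.
    var arbUser = jsc.record({
      name: jsc.nestring,
      age: jsc.nat(120)
    });

    jsc.assert(jsc.forall(arbUser, function (user) {
      return user.name.length > 0 && user.age <= 120;
    }));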
On 01/14/2015 04:57 PM, James Douglas wrote:
I'd love to hear your thoughts and learn about your related experiences. What are your favorite code coverage tools and services?
PHPUnit has a useful code coverage tool, and there are reports running for core[1] and some extensions[2]. In my experience it's extremely CPU-intensive and slow, so it's not convenient to run before every commit.
[1] https://integration.wikimedia.org/cover/mediawiki-core/master/php/ [2] https://tools.wmflabs.org/coverage/
-- Legoktm
Hello,
The core coverage job is run using Zend and Xdebug. It would probably be faster using HHVM and XHProf. PHPUnit has a plugin for XHProf at https://github.com/sebastianbergmann/phpunit-testlistener-xhprof
If someone can craft an entry point in mediawiki/core's composer.json and add the relevant dependencies, that would make it easy for me to migrate the Jenkins job to it.
cheers,