On 03/07/2014 10:08 AM, Greg Grossmeier wrote:
What we should do, however, is have a true "deployment pipeline". Briefly defined: A deployment pipeline is a sequence of events that increase your confidence in the quality of any particular build/commit point.
A typical example is: commit -> unit tests -> integration tests -> manual tests -> release
This is pretty much how it currently works in Parsoid. We deploy twice per week, with integration tests currently being our mass round-trip testing setup on 160k pages. Those tests take a few hours to run, so we only deploy revisions for which round-trip testing has finished. Anything uncovered there is fed back into the parser tests, so over time it has become less common to catch regressions only in round-trip testing. With improved integration tests, manual testing should also mostly be eliminated over time.
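To make the gating concrete, here is a minimal sketch in TypeScript of how the deploy choice could work. The names (RtResult, pickDeployable) and data shapes are hypothetical; our actual round-trip test server and deploy scripts differ in the details.

// Pick the newest commit whose round-trip run over the test corpus has
// finished without regressions; anything newer stays undeployed until its
// run completes.
interface RtResult {
  commit: string;        // commit hash that was round-trip tested
  finished: boolean;     // whether the full run completed
  regressions: number;   // pages that got worse vs. the previous run
}

function pickDeployable(commitsNewestFirst: string[], results: RtResult[]): string | null {
  const byCommit = new Map<string, RtResult>();
  for (const r of results) {
    byCommit.set(r.commit, r);
  }
  for (const commit of commitsNewestFirst) {
    const r = byCommit.get(commit);
    if (r && r.finished && r.regressions === 0) {
      return commit;
    }
  }
  return null; // nothing fully tested yet, so skip this deploy window
}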
Really, a pipeline isn't a thing like your indoor plumbing but more of a mindset/way of designing your test infrastructure. But it means that you keep things self-contained (contrary to the mobile example above) and things progress through the pipeline in a predictable way/pace.
This is one of the big arguments for narrow interfaces and services.
In Parsoid we have small mock implementations of the MediaWiki API endpoints we use, which allow us to run parser tests without a wiki in the background. Network services tend to sit at a medium granularity (coarser than modules, finer than the entire system) with necessarily narrow interfaces. Doing much of the testing at this level often seems to strike a good balance between effort, run time (still suitable for CI) and capturing the interface behavior that is essential to users of the service.
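As a rough illustration of the mock idea (this is not our actual test harness; the endpoint shape and canned data below are made up), a few dozen lines with Node's http module are enough to stand in for the API calls a parser test needs:

import * as http from "http";
import { URL } from "url";

// Hypothetical canned wikitext keyed by page title.
const pages: Record<string, string> = {
  "Main_Page": "Hello ''world''",
};

const server = http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const action = url.searchParams.get("action");
  const title = url.searchParams.get("titles") ?? "";

  if (action === "query" && title in pages) {
    // Response shape loosely modeled on action=query&prop=revisions.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({
      query: { pages: { "1": { title: title, revisions: [{ "*": pages[title] }] } } },
    }));
  } else {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "not mocked" }));
  }
});

// Parser tests then point their API URI at this server instead of a live wiki.
server.listen(8080);

Because the interface is narrow, the fake stays small, which is exactly the argument for doing much of the testing at the service boundary.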
Gabriel