On 20/04/2018 03:18, Pine W wrote:
Cool. Pardon my novice level understanding of containers and devops. Am I correct in saying that the plan is to use Docker to improve the efficiency of testing for MediaWiki?
It is partly about efficiency. Our CI jobs currently run on a pool of virtual machines on top of the OpenStack cluster, and each instance is deleted after a single build. A piece of software (Nodepool) takes care of the deletion and replenishes the pool by asking for new instances to be started.
The pool is fairly limited in size and rate limited, since any time it sends too many requests the OpenStack cluster gets into trouble. That has been a source of pain and headaches, and it is overall quite slow.
Docker addresses that part:
* we provide the frozen environments CI uses to run tests
* anyone can download them with a docker pull
* the Docker container is spawned at the start of the build and typically takes just a few milliseconds to start
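For example, assuming the CI images are published on the Wikimedia registry under a name like releng/quibble-stretch (the exact registry path and tag are an assumption on my part, check integration/config or the Quibble docs for the real ones), fetching one is a single command:

    # pull one of the frozen CI images (image name is illustrative)
    docker pull docker-registry.wikimedia.org/releng/quibble-stretch:latest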
It is also next to impossible to reproduce a CI build for MediaWiki. There are too many requirements:
* the image used to spawn the virtual machine, which is not publicly available
* parameters being injected by the workflow system (Zuul)
* Jenkins jobs written in YAML (integration/config)
* shell scripts in a git repo (integration/jenkins)
Quibble addresses that second part: it aggregates all of the logic and flow into a single script, which is arguably easier to run.
Bonus: the Docker images ship with Quibble, so a container has everything one needs to properly reproduce a build.
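As a rough sketch of what that looks like in practice (the image name, the bind mounts and the default entry point here are assumptions, so please check the Quibble documentation for the real invocation):

    # run Quibble inside the container against a local workspace;
    # the two mounts keep the download cache and the logs on the host
    docker run -it --rm \
        -v "$(pwd)/cache:/cache" \
        -v "$(pwd)/log:/workspace/log" \
        docker-registry.wikimedia.org/releng/quibble-stretch:latest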
As for the efficiency, there are a few optimizations left to be done, namely cloning the repositories in parallel and skipping tests when we know another job has already run them. For example, the JavaScript eslint check should only be run once.
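To illustrate the parallel clone idea (the repository list and the concurrency level are made up for the example; this is not how Quibble is wired up today):

    # clone a few repositories four at a time instead of one after the other
    printf '%s\n' mediawiki/core mediawiki/vendor mediawiki/skins/Vector |
        xargs -P 4 -I {} git clone "https://gerrit.wikimedia.org/r/{}"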
But overall, yes: the switch should make it faster to get feedback on changes and to have them merged.