On Wed, Oct 14, 2009 at 3:51 PM, Aryeh Gregor <Simetrical+wikilist@gmail.com> wrote:
> On Wed, Oct 14, 2009 at 6:09 PM, Brion Vibber <brion@pobox.com> wrote:
>> Throwing in a huge number of committers at once is problematic as well, since that means more time is going to be spent in post-commit review; new committers generally need some time to really ramp up.
>> Pre-commit review -- e.g. checking and then committing patches -- takes longer, but is safer. It's a balance that needs to be struck, and yeah, it's tricky.
>> Ideally we'd have a much faster, cleaner pre-commit review pipeline and keep the number of general core committers relatively small. (We wouldn't have nearly as many committers as we do -- over 100! -- if patch review were consistent.)
> Given the existence of wmf-deployment and release branches, though, the distinction between pre-commit and post-commit review seems mostly academic. MediaWiki trunk is pretty much just a place where people believed to be reasonably sane can throw up code for broader review and inspection prior to actual deployment or release. Having a central trunk with lots of committers reduces problems with merge conflicts and gets patches tested earlier in the development process, compared to a pre-commit-review system.
> Post-commit review is a big problem for us, but that's because nobody likes reviewing code and not many people are really qualified to, anyway. Places like Mozilla and Linux that do pre-commit review have a much larger number of developers paid to do review, as far as I can tell. From what I've seen, at Mozilla you assign your patch to someone and they actually look at it fairly quickly. Linux is its own crazy world, which we need not even talk about as long as we're not using a DVCS, but a lot of people there (like Linus Torvalds) spend much more time reviewing than coding, as I understand it.
> As long as we have only a couple of people who are a) entrusted with reviewing code for deployment/release and b) paid to do so, I imagine review will remain a problem. Shutting out new committers is one way to mitigate it, but it's not a very healthy way for MediaWiki. At this point, I'd imagine one person paid to do full-time review would be enough to get us back to a reliable weekly deployment schedule with time left over, even if we add a bunch more committers.
> Then again, I say all this as someone who hasn't even had time to look at the last 1000+ commits. :(
I'd agree with Aryeh. It makes sense to have most development work occur in SVN with relatively light standards for granting commit access. It increases the eyeballs on the code and keeps everyone on the same page. One needs a visible and intelligible process for managing the development of new features (which has as much to do with social engineering and management as it does with technology), but much of the current development work to fix bugs and make small improvements is not really a big deal. And to be honest, there aren't that many commits per day.
However, we could greatly improve the process by having more automated tools provide developer feedback. There is no reason we couldn't check the performance and behavior of every code revision. Measuring things like execution time, memory load, and SQL query counts, coupled with tests looking for differences in rendered page appearance, could catch code with poor performance or unexpected behavior earlier, without requiring manual review by the experts. Obviously expert review will always be part of the process, but a lot can be done to make the reviewers' lives easier, and that in turn should help reduce the review bottleneck. WMF's plan to hire someone to run the review process will also make a big difference.
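To give a rough sense of what I mean, a per-revision check could be as simple as the sketch below (the test URLs, baseline file, and threshold are made up for illustration; a real harness would pull SQL query counts and memory use out of MediaWiki's profiler rather than just timing requests):

#!/usr/bin/env python
# Rough sketch only -- test pages, baseline file, and threshold are invented
# for illustration, not an existing tool.

import json
import time
import urllib.request

TEST_PAGES = [
    "http://localhost/wiki/Main_Page",
    "http://localhost/wiki/Special:RecentChanges",
]
BASELINE_FILE = "perf-baseline.json"
SLOWDOWN_THRESHOLD = 1.25  # flag anything 25% slower than the last good run


def measure(url):
    """Fetch a page and record how long it took and how big the output was."""
    start = time.time()
    body = urllib.request.urlopen(url).read()
    return {"seconds": time.time() - start, "bytes": len(body)}


def main():
    current = {url: measure(url) for url in TEST_PAGES}

    try:
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
    except OSError:
        baseline = {}

    for url, now in current.items():
        then = baseline.get(url)
        if not then:
            continue
        if now["seconds"] > then["seconds"] * SLOWDOWN_THRESHOLD:
            print("POSSIBLE PERF REGRESSION: %s took %.2fs (baseline %.2fs)"
                  % (url, now["seconds"], then["seconds"]))
        if now["bytes"] != then["bytes"]:
            # Crude stand-in for a real rendered-output diff; a real test
            # would normalize timestamps and other per-request noise first.
            print("RENDERED OUTPUT CHANGED: %s (%d -> %d bytes)"
                  % (url, then["bytes"], now["bytes"]))

    # Save this run as the new baseline for the next revision.
    with open(BASELINE_FILE, "w") as f:
        json.dump(current, f, indent=2)


if __name__ == "__main__":
    main()

Run against a test wiki after each commit, something along those lines would at least flag the obvious regressions for a human to look at.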
-Robert Rohde