On Tue, May 31, 2011 at 12:37 PM, Max Semenik <maxsem.wiki(a)gmail.com> wrote:
> On 31.05.2011, 22:41 Rob wrote:
>> So, there are a number of possible solutions to this problem. These are
>> independent suggestions, but any of these might help:
>> 1. We say that a commit has some fixed window (e.g. 72 hours) to get
>> reviewed, or else it is subject to automatic reversion. This will
>> motivate committers to make sure they have a reviewer lined up, and
>> make it clear that, if their code gets reverted, it's nothing
>> personal... it's just our process.
>> 2. We encourage committers to identify who will be reviewing their
>> code as part of their commit comment. That way, we have an identified
>> person who has license to revert if they don't understand the code.
> Social problems are hard to fix using technical means. What will
> happen if someone approves a revision that later turns out to be
> defective -- off with their head?
This isn't a technical solution, but a social one to force us to make sure
that things that haven't been looked over GET looked at.
Consider it to already be our hypothetical model -- it's just not been
enforced as consistently as it's meant to be. Bad code -> fixed or reverted
ASAP. Unseen code -> assumed bad code, so look at it.
> "OMG my commit will be wasted in 20
> minutes! Heeelp someone" is not the best scenario for clear thinking.
> Time pressure might result in people reviewing stuff they're less
> familiar with, and reviews being done in haste, so CR quality will
> suffer.
I'd say better to do that continuously and confirm that things are being
reviewed *and* run on automated tests *and* run against humans on
actual-production-use beta servers *and* in actual production use within a
reasonable time frame, than to let review slide for 10 months and rush it
all right before a giant release.
That IMO is what really hurts code review -- removing it from the context of
code *creation*, and removing the iterative loop between writing, debugging,
testing, debugging, deploying, debugging, fixing, and debugging code.
> Although I'm not the biggest fan of rushing towards DVCS, the
> distributed heavily-branched model where mainline is supported only
> through merges from committed-then-reviewed branches looks far
> superior to me. This way, we will also be able to push stuff to WMF
> more often, as there will be no unreviewed or unfinished code in
> trunk.
I tend to agree that a more active 'pull good fixes & features in from
feeder branches' model can be helpful, for instance in allowing individual
modules, extensions, etc. to iterate faster in a shared-development
environment without forcing the changes into trunk/mainline's production
deployment & release channels until they're ready.
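For concreteness, the committed-then-reviewed branch flow could look roughly like this in git -- a minimal sketch only, with made-up branch names and an illustrative "trunk" mainline, not a description of any actual Wikimedia setup:

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"
git init -q -b trunk .                     # "trunk" stands in for the mainline
git config user.email "demo@example.org"   # throwaway identity for the demo repo
git config user.name "Demo"
git commit -q --allow-empty -m "initial trunk state"

# Work happens on a feeder branch: commit first, review afterwards.
git checkout -q -b review/fix-parser
git commit -q --allow-empty -m "fix parser edge case"

# Only after the branch has been reviewed does it get merged to mainline;
# --no-ff keeps an explicit merge commit marking the review boundary.
git checkout -q trunk
git merge -q --no-ff -m "merge reviewed fix-parser branch" review/fix-parser
git log --merges --oneline
```

Under this model, trunk only ever advances by merges of already-reviewed branches, which is what would let unfinished or unreviewed work stay out of the deployment channel.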
But these need to be combined with really active, consistent management;
just like keeping the deployment & release branches clean, working, and up
to date consistently requires a lot of attention -- when attention gets
distracted, the conveyor belt falls apart and fixes & features stop reaching
the public until the next giant whole-release push, which turns into a
circus.
-- brion