"Chad" innocentkiller@gmail.com wrote in message news:BANLkTinRbx45WSG6YycLWpZpO7MJb3RwWA@mail.gmail.com...
On Tue, May 31, 2011 at 7:49 PM, Happy-melon happy-melon@live.com wrote:
Every way of phrasing or describing the problem with MW CR can be boiled down to one simple equation: "not enough qualified people are spending enough time doing Code Review (until a mad rush before release) to match the amount of code being committed".
Maybe people shouldn't commit untested code so often.
I'm not joking.
-Chad
That's a worthy goal, but it's orthogonal to Code Review. Every single person on this list has committed unreviewed code to some repository at some point; the fewer times you've done it, the more likely you are to have crashed the cluster on the occasions you did. People doing some unquantified extra amount of testing doesn't mean we can spend any less time on CR per revision.

Automated testing, regression testing, and other well-defined infrastructures (I think Ryan's Nova Stack is going to be of huge benefit in this respect) *do* save CR time, *because reviewers know exactly what has been tested*. A policy like "every bugfix must include a regression test" would definitely improve things in that area.
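To illustrate the point (a minimal sketch, in Python rather than MediaWiki's PHP/PHPUnit; the function, bug number, and behaviour here are invented for the example, not anything in our tree): a regression test committed alongside a bugfix pins down the exact input that used to break, so the reviewer can see at a glance what was tested.

    import unittest

    def normalize_title(title):
        # Hypothetical fixed function: the old version stripped runs of
        # underscores entirely, mangling titles like "Foo__bar"; the fix
        # collapses them to single spaces instead.
        return ' '.join(title.replace('_', ' ').split())

    class TestBug12345Regression(unittest.TestCase):
        # Committed with the fix: if the old behaviour ever comes back,
        # this fails in the automated run, before any human review.
        def test_consecutive_underscores_collapse_to_one_space(self):
            self.assertEqual(normalize_title('Foo__bar'), 'Foo bar')

        def test_plain_title_unchanged(self):
            self.assertEqual(normalize_title('Foo bar'), 'Foo bar')

    if __name__ == '__main__':
        unittest.main()

The reviewer's job then shrinks to checking that the test actually captures the reported bug, rather than re-deriving from scratch what the change might break.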
Of course, it's undeniable that more testing would lead to fewer broken commits, and that's a Good Thing. If we implement processes which set a higher bar for commits 'sticking' in the repository, whether that's pre-commit review, a branch/integrate model, a post-commit countdown, or whatever, people will rise to that level.
--HM