Tim Starling wrote:
In the last week, I've been reviewing extensions that were written years ago, and were never properly looked at. I don't think it's appropriate to measure success in code review solely by the number of "new" revisions after the last branch point.
Code review of self-contained projects becomes easier the longer you leave it. This is because you can avoid reading code which was superseded, and because it becomes possible to read whole files instead of diffs. So maintaining some amount of review backlog means that you can make more efficient use of reviewer time.
I agree. But that only works for extensions, since:
* They are self-contained
* They are relatively small
* They are not deployment blockers
And even then, they are harder to fix months later, once the author has moved on (think of the poolcounterd bug).
I don't think that would work as well for core MediaWiki, although it may be feasible for not-so-big features with a kill switch. Large commits changing many files would need a branch so they can be reviewed as a set. However, our problem with branches is that they remove almost all peer review and testing, and merges are likely to introduce subtle bugs. The drawbacks of late review apply there too.
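For context, a minimal sketch of what such a kill switch might look like, following the usual MediaWiki pattern of a global configuration flag; $wgEnableNewFeature, oldCodePath() and newCodePath() are hypothetical names used only for illustration, not anything in core:

<?php
// Hypothetical kill switch: a configuration flag that defaults to off,
// so the new code path can be disabled instantly without a revert.
$wgEnableNewFeature = false; // flip to true in LocalSettings.php to enable

function oldCodePath( $input ) {
	// The existing, already-reviewed behaviour.
	return strtoupper( $input );
}

function newCodePath( $input ) {
	// The new, not-yet-reviewed behaviour.
	return ucwords( $input );
}

function doTheWork( $input ) {
	global $wgEnableNewFeature;
	// If something breaks after deployment, setting the flag back to
	// false disables the new code without reverting or re-branching.
	return $wgEnableNewFeature ? newCodePath( $input ) : oldCodePath( $input );
}

echo doTheWork( "hello world" ), "\n"; // prints "HELLO WORLD" while the flag is off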
Our current system links version control with review. After a developer has done a substantial amount of work, they commit it. That doesn't necessarily mean they want their code looked at at that point; they may just want to make a backup.
How do you propose to fix that? By having the committer defer their own revision? It may be worth keeping a list of review requests on mediawiki.