As Ashar pointed out this week, we've fallen behind in code review. On Robla's page (http://toolserver.org/~robla/crstats/crstats.html) you can see that commits marked “new” are beginning to edge up again.
To help with code review, Roan introduced “sign-offs” for developers who are not as familiar with the MediaWiki code base. I'm sure he'll correct me if I'm wrong, but I would like to encourage any developer who isn't ready to mark code “OK” to use the sign-off feature — to indicate that they've tested or inspected the code.
If you're running trunk in your testing or (heaven forfend!) production, please try to see if you're exercising new code and give us feedback by marking the sign-off as “tested”. This is one of the best ways to get acquainted with the code base — if you miss something in your testing, we'll be sure to let you know!
Of course, if you've been reviewing code, THANK YOU and keep up the good work.
Mark
Hi,
I am happy to mark things as tested if that helps out :-)
But I have to say that trunk seems to be more unstable... I use trunk for 3 production sites and it's updated every morning... and it seems to break every two days... Normally it would break once a month...
Best,
Huib
Huib Laurens sterkebak@gmail.com writes:
I am happy to mark things as tested if that helps out :-)
It will, but also helpful will be letting us know when things break.
As an example, TranslateWiki admins helpfully marked code that introduced problems FIXME and this really helped me learn, for example, to keep my own code E_STRICT clean.
If you are running trunk on a production wiki, then do yourself a favor and participate in code review (http://www.mediawiki.org/wiki/Special:Code/MediaWiki/). Note that I'm not asking you to mark code “OK”. I think the most helpful thing for us and you would be to let us know when code causes problems.
“svn annotate” is your friend in tracking down problems and finding the culprit.
Mark.
Huib Laurens wrote:
Hi,
I am happy to mark things as tested if that helps out :-)
But I have to say that trunk seems to be more unstable... I use trunk for 3 production sites and it's updated every morning... and it seems to break every two days... Normally it would break once a month...
Best,
Huib
That's bad news. Any specific pattern in the breakage? Something for which we could add tests, perhaps?
On Wed, Mar 2, 2011 at 10:48 AM, Huib Laurens sterkebak@gmail.com wrote:
I am happy to mark things as tested if that helps out :-)
But I have to say that trunk seems to be more unstable... I use trunk for 3 production sites and it's updated every morning... and it seems to break every two days... Normally it would break once a month...
I'd like to add something to the great suggestions that have already come in. Since you're already doing daily updates and seeing the breakage, you already know within 50-60 revisions of where the breakage is likely to be. It may only take 6-7 extra checkouts to narrow it down to a specific revision by doing a binary search through the previous day's check-ins, even without an informed guess about which file or function is the likely suspect, and even when including extensions. If you narrow it down to just core (not extensions), that's roughly 20 check-ins a day, so 4-5 extra checkouts should do the trick. Any little bit of debugging like this is greatly appreciated!
Rob
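The bisection Rob describes can be sketched in a few lines. This is a hypothetical sketch, not anything in MediaWiki itself: `first_bad_revision` is an invented name, and the `is_broken` predicate stands in for "check out revision r, update the wiki, and see whether the bug appears".

```python
def first_bad_revision(revisions, is_broken):
    """Binary-search a sorted list of candidate revision numbers for the
    first revision where is_broken(rev) returns True.

    Assumes the revision just before revisions[0] is known good and the
    last entry is known bad (i.e. yesterday's checkout worked, today's
    doesn't).
    """
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(revisions[mid]):
            hi = mid          # the breakage is at mid or earlier
        else:
            lo = mid + 1      # the breakage came after mid
    return revisions[lo]
```

With roughly 60 revisions between daily updates, this halving needs only about log2(60), i.e. 6 checkouts, to pin down the culprit, matching Rob's estimate.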
Rob Lanphier wrote:
On Wed, Mar 2, 2011 at 10:48 AM, Huib Laurens sterkebak@gmail.com wrote:
I am happy to mark things as tested if that helps out :-)
But I have to say that trunk seems to be more unstable... I use trunk for 3 production sites and it's updated every morning... and it seems to break every two days... Normally it would break once a month...
I'd like to add something to the great suggestions that have already come in. Since you're already doing daily updates and seeing the breakage, you already know within 50-60 revisions of where the breakage is likely to be. It may only take 6-7 extra checkouts to narrow it down to a specific revision by doing a binary search through the previous day's check-ins, even without an informed guess about which file or function is the likely suspect, and even when including extensions. If you narrow it down to just core (not extensions), that's roughly 20 check-ins a day, so 4-5 extra checkouts should do the trick. Any little bit of debugging like this is greatly appreciated!
Rob
That would be nice, of course. But we shouldn't demand that people who are already doing us a favour by detecting problems also debug them. Just a bug report like "Foo broke in the last 24h" or "After updating from r12345 to r54321, the foo links now look like bar" would be extremely useful, since such a bug is easy to resolve just in time (even if the fix is a revert), as opposed to a bug which has been there for a long time. Those 'fast' bugs are also good candidates for the weekly sprints.
On Wed, Mar 2, 2011 at 4:35 PM, Platonides Platonides@gmail.com wrote:
Rob Lanphier wrote:
I'd like to add something to the great suggestions that have already come in. Since you're already doing daily updates and seeing the breakage, you already know within 50-60 revisions of where the breakage is likely to be. It may only take 6-7 extra checkouts to narrow it down to a specific revision by doing a binary search through the previous day's check-ins, even without an informed guess about which file or function is the likely suspect, and even when including extensions. If you narrow it down to just core (not extensions), that's roughly 20 check-ins a day, so 4-5 extra checkouts should do the trick. Any little bit of debugging like this is greatly appreciated!
That would be nice, of course. But we shouldn't demand that people who are already doing us a favour by detecting problems also debug them. Just a bug report like "Foo broke in the last 24h" or "After updating from r12345 to r54321, the foo links now look like bar" would be extremely useful, since such a bug is easy to resolve just in time (even if the fix is a revert), as opposed to a bug which has been there for a long time.
Oh, sure. It could be that someone like Huib identifies a problem range, and someone else actually narrows it down to a specific commit. I guess I should have explicitly cast the net out wider for the second part. The second part doesn't require a deep knowledge of the codebase, so it's an activity that someone just getting started can do to help out.
Those 'fast' bugs are also good candidates for the weekly sprints.
Yup, they certainly are.
Rob
On 03/03/2011 02:35 AM, Platonides wrote:
Rob Lanphier wrote:
On Wed, Mar 2, 2011 at 10:48 AM, Huib Laurens sterkebak@gmail.com wrote:
I am happy to mark things as tested if that helps out :-)
But I have to say that trunk seems to be more unstable... I use trunk for 3 production sites and it's updated every morning... and it seems to break every two days... Normally it would break once a month...
I'd like to add something to the great suggestions that have already come in. Since you're already doing daily updates and seeing the breakage, you already know within 50-60 revisions of where the breakage is likely to be. It may only take 6-7 extra checkouts to narrow it down to a specific revision by doing a binary search through the previous day's check-ins, even without an informed guess about which file or function is the likely suspect, and even when including extensions. If you narrow it down to just core (not extensions), that's roughly 20 check-ins a day, so 4-5 extra checkouts should do the trick. Any little bit of debugging like this is greatly appreciated!
That would be nice, of course. But we shouldn't demand that people who are already doing us a favour by detecting problems also debug them. Just a bug report like "Foo broke in the last 24h" or "After updating from r12345 to r54321, the foo links now look like bar" would be extremely useful, since such a bug is easy to resolve just in time (even if the fix is a revert), as opposed to a bug which has been there for a long time. Those 'fast' bugs are also good candidates for the weekly sprints.
Agreed. Also, if logging in to bugzilla and filing a report seems like too much hassle, just dropping in at #mediawiki and asking "Hey, who broke feature X since yesterday?" is often enough. (Of course, you should still follow up with a bug report if no one comes up with an immediate solution, but in my experience someone pretty often does.)
In the IRC channel #mediawiki-i18n, the PHP warnings and notices from translatewiki.net are reported live. We usually paste relevant issues in #mediawiki, but those interested in them *now* are most welcome. At the moment we are trying to track down a memory usage issue.
We update translatewiki.net to trunk multiple times a day, and update events are also announced on-channel. Configuration of the wiki is also open[1].
Siebrand
[1] https://translatewiki.net/wiki/Configuration
Op 03-03-11 00:59 schreef Rob Lanphier robla@wikimedia.org:
On Wed, Mar 2, 2011 at 10:48 AM, Huib Laurens sterkebak@gmail.com wrote:
I am happy to mark things as tested if that helps out :-)
But I have to say that trunk seems to be more unstable... I use trunk for 3 production sites and it's updated every morning... and it seems to break every two days... Normally it would break once a month...
I'd like to add something to the great suggestions that have already come in. Since you're already doing daily updates and seeing the breakage, you already know within 50-60 revisions of where the breakage is likely to be. It may only take 6-7 extra checkouts to narrow it down to a specific revision by doing a binary search through the previous day's check-ins, even without an informed guess about which file or function is the likely suspect, and even when including extensions. If you narrow it down to just core (not extensions), that's roughly 20 check-ins a day, so 4-5 extra checkouts should do the trick. Any little bit of debugging like this is greatly appreciated!
Rob
On 03/03/11 05:48, Huib Laurens wrote:
But I have to say that trunk seems to be more unstable... I use trunk for 3 production sites and it's updated every morning... and it seems to break every two days... Normally it would break once a month...
Just don't use trunk in production. It seems like an easy fix to me.
There's no way we can have post-commit review without having things break occasionally. It doesn't matter how good our testing framework is.
I run several public wikis, for friends and family. I certainly don't put trunk on them, I use the stable release tags.
-- Tim Starling
2011/3/2 Mark A. Hershberger omhershberger@wikimedia.org:
To help with code review, Roan introduced “sign-offs” for developers who are not as familiar with the MediaWiki code base. I'm sure he'll correct me if I'm wrong, but I would like to encourage any developer who isn't ready to mark code “OK” to use the sign-off feature — to indicate that they've tested or inspected the code.
That's exactly what I had in mind when writing it, yes. Regardless of your level of experience, you can mark commits as 'tested' if you've confirmed it does what it claims to do. If you have some MediaWiki experience but don't quite consider yourself a code reviewer or an expert in the specific area a commit touches, you can read the diff of a commit and mark it as 'inspected' to signify you've read the diff and it looks fine to you.
If you test something and it's broken, or you read something and see something bad, add a comment to the revision and set its status to fixme. Before you do this, though, check the list of follow-up revisions to see if the issue you're about to report has been fixed already.
The sign-off system is designed specifically so that you absolutely don't have to worry about being wrong or not having enough experience. All a sign-off will do is add an entry saying "user X says this looks good", and multiple users can sign off on a commit as tested, inspected, or both, independently of each other. Sign-offs don't change the status (new/fixme/ok/...) of a revision in any way, but they will help the reviewer who ends up doing the final review by telling them how many eyes have looked at a particular piece of code.
You need to be in the coder group on mediawiki.org to perform (almost?) all of these actions. If you don't have coder rights, don't worry: they're given out very liberally. Any coder can promote other users to coders, and the bar is generally not much higher than showing good faith. Having commit access usually helps, but I don't believe it's required.
So the bottom line is, anyone can help, and everyone is encouraged to do so.
Roan Kattouw (Catrope)
P.S.: For the old hats, this is probably mostly stuff they already know, but I figured this post could serve well as a general introduction to getting involved in code review as a relatively inexperienced developer or end user, so I decided to write it that way.
Mark A. Hershberger wrote:
As Ashar pointed out this week, we've fallen behind in code review. On Robla's page (http://toolserver.org/~robla/crstats/crstats.html) you can see that commits marked “new” are beginning to edge up again.
To help with code review, Roan introduced “sign-offs” for developers who are not as familiar with the MediaWiki code base. I'm sure he'll correct me if I'm wrong, but I would like to encourage any developer who isn't ready to mark code “OK” to use the sign-off feature — to indicate that they've tested or inspected the code.
If you're running trunk in your testing or (heaven forfend!) production, please try to see if you're exercising new code and give us feedback by marking the sign-off as “tested”. This is one of the best ways to get acquainted with the code base — if you miss something in your testing, we'll be sure to let you know!
I don't currently have time for code review, but I'll try to report bugs for all issues I encounter.
FYI I'm one of those "crazy" people running a small production website[1] off trunk. I only update the site every few weeks, because I want to test the changes locally before they go live, but I do read the commit messages from mediawiki-cvs almost daily, so I know of any urgent issues.
(Small wish: it would be very helpful if every commit message referencing a bug would also include the one-line summary of the bug; that makes it much easier to quickly determine what the bug is about and if a bug is relevant for a specific environment. Thanks to all developers that already add this information!)
Of course, if you've been reviewing code, THANK YOU and keep up the good work.
I want to add my THANK YOU to both developers and code reviewers: you do a very valuable job!
Best regards, Thomas Bleher
On Fri, Mar 4, 2011 at 10:09 AM, Thomas Bleher ThomasBleher@gmx.de wrote:
(Small wish: it would be very helpful if every commit message referencing a bug would also include the one-line summary of the bug; that makes it much easier to quickly determine what the bug is about and if a bug is relevant for a specific environment. Thanks to all developers that already add this information!)
I fully agree with this. A commit summary should be more than "Fixed bug x" or "Follow-up rX: fixed some stuff". At minimum, you can include the bug summary in the commit message, or some other descriptive line stating the purpose of your commit.
Bryan
On Sat, Mar 05, 2011 at 10:54:20AM +0100, Bryan Tong Minh wrote:
On Fri, Mar 4, 2011 at 10:09 AM, Thomas Bleher ThomasBleher@gmx.de wrote:
(Small wish: it would be very helpful if every commit message referencing a bug would also include the one-line summary of the bug; that makes it much easier to quickly determine what the bug is about and if a bug is relevant for a specific environment. Thanks to all developers that already add this information!)
I fully agree with this. A commit summary should be more than "Fixed bug x" or "Follow-up rX: fixed some stuff". At minimum, you can include the bug summary in the commit message, or some other descriptive line stating the purpose of your commit.
I agree with the problem, here's an alternate solution that doesn't require people to modify their commenting behavior.
Post-process the commit summary. Decorate 'bug x' with 'bug x: "<bug summary>"', which could also be a hyperlink to the bug report.
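A minimal sketch of that post-processing step. This is not existing MediaWiki code: `decorate_summary` is an invented name, and the `bug_summaries` dict stands in for an actual lookup against Bugzilla.

```python
import re

def decorate_summary(commit_summary, bug_summaries):
    """Expand each 'bug NNNN' reference into 'bug NNNN: "<one-line summary>"'.

    bug_summaries maps bug-ID strings to their one-line summaries
    (in practice this would come from a Bugzilla query); references to
    unknown bugs are left untouched.
    """
    def repl(match):
        bug_id = match.group(1)
        summary = bug_summaries.get(bug_id)
        if summary is None:
            return match.group(0)
        return '%s: "%s"' % (match.group(0), summary)
    return re.sub(r'[Bb]ug (\d+)', repl, commit_summary)
```

For example, `decorate_summary('Fixed bug 1234', {'1234': 'Login fails on IE6'})` yields `'Fixed bug 1234: "Login fails on IE6"'`; rendering the decorated reference as a hyperlink to the bug report would be a straightforward extension.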