Neilk wrote:
At the risk of being impolite -- our code review tool is not that nice. (I don't expect that anyone who worked on it would even disagree with me here.)
On 24/03/11 06:47, MZMcBride wrote:
It's only impolite if you criticize the code review tool without being constructive. What specifically do you not like about the current code review tool? And have you filed bugs about getting these issues addressed?
Neilk is a realist. Either we bring more developers into the system or we drop it and reuse another system that already has some developers. For example, we are not developing our own bug tracker or webmail interfaces. We reuse code from others just like others reuse our wiki code.
I would name a few issues with our CR system:
- does not know about branches
- lacks a manual merging system
- lacks an automatic merging system (something like: if a rev and its follow-ups got 4 sign-offs, merge them all into the release branch)
- a rev + its follow-ups could be grouped; we would then review the group as a whole instead of individual revisions
- I still have not figured out how to filter by author AND path
- comment system should be LiquidThreads based
- the diff is useless (I use a local tool)
- still have to rely on local tools for merging, reverting, blaming
- not integrated with bugzilla
There are a lot of good points though!
On Thu, Mar 24, 2011 at 5:44 PM, Ashar Voultoiz hashar+wmf@free.fr wrote:
Neilk is a realist. Either we bring more developers into the system or we drop it and reuse another system that already has some developers.
It's sitting there in SVN; nothing is stopping people from working on it. In fact, Sam and Chad might like the help. But as for your argument: having more developers (manpower) != better working systems.
I would name a few issues with our CR system: ....
- I still have not figured out how to filter by author AND path
Have you asked anyone for help? Although I think it may be broken, based on [[Bugzilla:26195]].
- comment system should be LiquidThreads based.
There is a bug and plans for this (Pending the LQT backend rewrite)
- the diff is useless (I use a local tool)
How so? Have you submitted a bug so people know about this?
- still have to rely on local tools for merging, reverting, blaming
Because those are SVN actions that need to be done as an SVN user, and our SVN -> wiki user system is kinda lacking, from my understanding.
- not integrated with bugzilla
What parts could be improved by having it more integrated? -Peachey
On 24/03/11 09:41, K. Peachey wrote: <snip>
It's sitting there in SVN; nothing is stopping people from working on it. In fact, Sam and Chad might like the help. But as for your argument: having more developers (manpower) != better working systems.
I am a dev with commit access and could probably sync the patches on the live site (the day I figure out whether I have commit access to the production branch). My personal issue is that I lack the time to think about the problem, design a solution, implement it and test it. Since I have workarounds, I focus on small tasks or things that really matter to me and to my wife (she is my first tester / user).
Anyway, I was answering MZMcBride in the context of things I do not like in our code review software. Those issues highlight the reviewing paradigm behind the tool, and Neil Kandalgaonkar explained this far better than I ever could.
(Still, I like our code review software since it fits our current needs.)
<snip issues>
On Thu, Mar 24, 2011 at 3:44 AM, Ashar Voultoiz hashar+wmf@free.fr wrote:
- I still have not figured out how to filter by author AND path
Special:Code/MediaWiki/author/hashar?path=/trunk/phase3
or if you only want unreviewed revs:
Special:Code/MediaWiki/status/new?author=hashar&path=/trunk/phase3
The UI still sucks for it, but support *is* there.
-Chad
On 03/24/2011 12:44 AM, Ashar Voultoiz wrote:
On 24/03/11 06:47, MZMcBride wrote:
It's only impolite if you criticize the code review tool without being constructive. What specifically do you not like about the current code review tool?
I agree with most of what Ashar said. Lack of branching, merging, blame, only semi-integrated with bug tracking.
And have you filed bugs about getting these issues addressed?
My guess, which could be wrong, is that it would be cleaner to move to a new tool, or new combination of tools. I'm not sure which yet.
As for Special:Code et al., I disagree with the whole paradigm on which it is based: effectively it just annotates revisions in SVN. This means that you need this "outer circle" of checkins that don't really count. And then there's the "inner circle", where some esteemed developer who's done a lot of cool things for MediaWiki gets, as their reward, to constantly deal with other people's patches that they may not care about.
I believe that paradigm is broken because the incentives are backwards. Average developers are frustrated because no matter HOW much energy and commitment they have, they can't make code review go faster. The inner circle of production-branch committers are also frustrated because it's up to them to deal with all the pain of merging. Suddenly getting a feature production ready is THEIR problem, and sometimes super-productive people like Roan have found it easier to just rewrite it from scratch. Otherwise it's much easier to just ignore the backlog for long periods.
My ideal is a code review tool that:
- Allows us to deploy trunk. At any time. Eliminate the production branch. Any developer in the world should be able to work on the code we actually have in production without having to decide between trunk and a production branch.
- Allows the "outer circle" developer to take things into their own hands. They can check out the code that is develop a changelist, or set of changes, that is carefully crafted to be applied to trunk. If they are screwing up, they should get instant feedback, not six months later.
- Does not unduly penalize the "inner circle" developer. Give them a constant stream of light duties, not a soul-crushing marathon of heavy duties once a year.
I admire the code review paradigm at Google, which does all that, but which is regrettably based on tools that are not all available freely. So I don't have a 100% solution for you yet. I've talked informally with RobLa about this but I didn't have anything really solid to bring to the community. (In early 2010 I started looking at ReviewBoard but then I realized that the MediaWiki community had their own tool and I figured I should understand that first.)
There's been some confusion, so perhaps I have not been clear that I'm referring to a totally different paradigm of code review, where the code to be reviewed isn't even in subversion. Essentially the developers would pass around something like patches. Some systems make this work over email, but it's easier if it's tracked in a web based system. As developers work on the patch, the change log message or other metadata about the patch is annotated with developer comments. Sometimes you bring in more developers -- maybe there's some aspect that someone else is better situated to understand.
This process goes on until the patch is deemed worthy. Then, and only then, does it get committed to an authoritative repository, authored by some developer and annotated as reviewed by one or more other developers. (Emergency patches get a "review this later" tag).
Now, when code review happens before committing, you get the benefit that all non-emergency checkins have been at least looked at by somebody. Personally I believe this should be happening for everybody, even experienced developers. Even they make mistakes sometimes, and if not, then other developers learn how to code like them by reading their changes more intently.
But then your code review system has to reinvent several wheels and you annoy the developer by making them do fresh checkouts all the time.
Git can do a lot of that in a much cleaner way, so I expect Gerrit might be an even better solution.
Anyway, this is all vague and clearly I'm talking about radical changes to the entire MediaWiki community. But I believe it would help quite a bit.
Maybe I should work on it a bit more and present it on a wiki page somewhere, as well as in person in Berlin in May?
Neilk is a realist. Either we bring more developers into the system or we drop it and reuse another system that already has some developers. For example, we are not developing our own bug tracker or webmail interfaces. We reuse code from others just like others reuse our wiki code.
I would name a few issues with our CR system:
- does not know about branches
- lacks a manual merging system
- lacks an automatic merging system (something like: if a rev and its follow-ups got 4 sign-offs, merge them all into the release branch).
- a rev + its follow-ups could be grouped; we would then review the group as a whole instead of individual revisions.
- I still have not figured out how to filter by author AND path
- comment system should be LiquidThreads based
- the diff is useless (I use a local tool)
- still have to rely on local tools for merging, reverting, blaming
- not integrated with bugzilla
There are a lot of good points though!
On 03/24/2011 10:13 AM, Neil Kandalgaonkar wrote:
Anyway, this is all vague and clearly I'm talking about radical changes to the entire MediaWiki community. But I believe it would help quite a bit.
Maybe I should work on it a bit more and present it on a wiki page somewhere, as well as in person in Berlin in May?
I've added your idea to the list of possible topics to talk about/work on in Berlin in May:
http://www.mediawiki.org/wiki/Berlin_Hackathon_2011#Topics
Yeah, maybe in Berlin you could briefly summarize your proposal and we could hash out some next steps?
best, Sumana
2011/3/24 Neil Kandalgaonkar neilk@wikimedia.org:
- Allows us to deploy trunk. At any time. Eliminate the production
branch. Any developer in the world should be able to work on the code we actually have in production without having to decide between trunk and a production branch.
You're basically arguing for Linux-style pre-commit code review where people e-mail patches back and forth. However, as long as we still have SVN, that means that these pre-committed patches ARE NOT VERSIONED, let alone necessarily public.
I believe this is bad because:
1) Keeping track of patches, collaborating on a larger feature, etc. become harder (no benefits of a VCS)
2) Resolving conflicts between patches is done by reviewers when they apply them instead of being conveniently outsourced to the author-committers
3) If review capacity is low, patches don't get committed, their authors bug reviewers a few times, give up, get demotivated and leave the project
I think this workflow could work with a DVCS like Git, but I strongly oppose implementing it while we're still using a centralized VCS like Subversion.
Instead, let me outline my recollection of how code review and deployment worked back when I joined this project, and explain how I think this process can be resurrected. This was all a long time ago and I was fairly new to MW, so please correct me where I'm wrong.
* Someone commits something
* A notification is sent to the mediawiki-cvs list. This is still the case, except back then more than a few people were subscribed to it, and traffic wasn't as high
* Optionally, a mediawiki-cvs reader (the usual suspects being Brion, Tim and Rob Church) reads the diff and notices something is wrong with it. They reply to the commit notification, citing parts of the diff inline and raising their objections. This reply is automatically sent to wikitech-l (we didn't have the CodeReview extension yet), which committers are expected to be subscribed to. A discussion about the commit takes place, possibly leading to followup commits
* The next Monday, Brion smoketests HEAD. If he finds breakage, he tracks down the offending revision(s) and reverts things until everything seems to work. ("Keep trunk runnable" was taken really seriously, and we mostly had a revert->reapply cycle instead of a fixme->followup cycle: it was perfectly acceptable to revert broken things if they couldn't be fixed in 5 minutes, especially if you were as busy as Brion.)
* In addition to smoketesting, Brion also reviews all revisions to phase3 and WMF-run extensions (with the level of commit activity we had back then, this wasn't an unreasonable job for one person to do on one day) and reverts things as appropriate
* trunk is now in a state where it seems to run fine on Brion's laptop. Brion deploys trunk to testwiki, tests a bit more, then deploys to the cluster
As you know, our workflow has become a bit different over the years. At some point, CodeReview was written to make revision discussions nicer and to provide status fields so Brion could outsource some review work. Later, the WMF branch was introduced to not only track live hacks and WMF-specific changes, but also to remove the dependency on a runnable trunk.
The reason this workflow resulted in frequent deployments of trunk was that review was always close to HEAD (never behind more than about 2 weeks). The reason it broke down in the end was that Brion kept having less time to review more things, but that doesn't mean we can't make it work again by having more than one reviewer. I think the following conditions are necessary for this to happen:
* We need to have multiple reviewers (duh)
* Every reviewer needs to budget time for code review, and they need to not get tangled up in other obligations to a degree where they can't spend enough time on review. This is largely a management thing
* It needs to be clear who is responsible for reviewing what. This doesn't need to be set in stone, but we have to avoid a situation where revisions aren't reviewed because no one feels responsible. This can be accomplished by agreeing on review assignments based on e.g. path/subsystem, and having some manager-like person (possibly an EPM) monitor the process and make sure nothing gets left by the wayside. If conventional review assignments leave too much ambiguity, additional criteria can be introduced, e.g. the day of the week something was committed. More on this in a minute
* There needs to be a clear expectation that commits are generally reviewed within a certain time (say, a week) after having been committed. The same manager-like person should also be keeping an eye on this and making sure overdue revs are reviewed pronto
* We need to set a clear policy for reverting problematic revisions (fixme's) if they aren't addressed quickly enough (again, let's say within a week). Currently we largely leave them be, but I think we should go back to something more decisive and closer to the "keep trunk runnable, or else Brion kicks your ass" paradigm and make it a bit more formal this time
As for the who-reviews-what criteria, I think they should be designed to ensure most things will get reviewed without EPM intervention being necessary, while not getting in the way of getting things done. I think the best way to do this would be to first assign revisions based on path or subsystem (e.g. Tim for parser revs, Sam or me for API revs, etc.) then for everything that doesn't neatly fall into one subsystem someone's an expert on or can't easily be found by a path search, we'd divide the week in a number of chunks and make one reviewer responsible for reviewing the 'misc' revs in each chunk (or throwing them to someone else if it's over their head). If this is timed in a way that corresponds nicely with our reviewers' availability (e.g. schedule around school for people like me, schedule Tim for Sunday cause that's his Monday, etc.) we could have a system where the median time to review is like 24-48 hours.
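Just to make that concrete, here is a rough sketch of what such an assignment rule could look like. The paths, names and weekday schedule below are purely illustrative, not an actual proposal for who reviews what:

// Illustrative only: pick a reviewer for a commit, first by path/subsystem,
// then by falling back to a weekday rotation for 'misc' commits.
function suggestReviewer( $path, $commitTimestamp ) {
	// Hypothetical subsystem assignments.
	$pathReviewers = array(
		'/trunk/phase3/includes/parser' => 'Tim',
		'/trunk/phase3/includes/api' => 'Roan or Sam',
		'/trunk/extensions/CodeReview' => 'Chad',
	);
	foreach ( $pathReviewers as $prefix => $reviewer ) {
		if ( strpos( $path, $prefix ) === 0 ) {
			return $reviewer;
		}
	}
	// Everything else: one person owns each weekday's 'misc' commits.
	$rotation = array(
		'Mon' => 'Reviewer A', 'Tue' => 'Reviewer B', 'Wed' => 'Reviewer C',
		'Thu' => 'Reviewer D', 'Fri' => 'Reviewer E', 'Sat' => 'Reviewer F',
		'Sun' => 'Tim', // Sunday is his Monday
	);
	return $rotation[ date( 'D', $commitTimestamp ) ];
}

So suggestReviewer( '/trunk/phase3/includes/api/ApiQuery.php', time() ) would land on the API people, and anything that doesn't match a subsystem falls through to whoever owns that weekday.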
These are my thoughts, and I'm definitely interested in talking about this in Berlin, as Sumana suggested.
Roan Kattouw (Catrope)
On Thu, Mar 24, 2011 at 2:00 PM, Roan Kattouw roan.kattouw@gmail.com wrote:
- Resolving conflicts between patches is done by reviewers when they
apply them instead of being conveniently outsourced to the author-committers
If there's a conflict, the reviewer can ask the patch submitter to submit a new version with the conflict resolved. I had this happen to me for one of the patches I submitted to Mozilla. I was then asked to submit an interdiff to highlight what had changed in my new version, so that the reviewer didn't have to re-review the parts of the patch that didn't change. Review-then-commit systems tend to place much more of a burden on the submitter and less on the reviewer.
- If review capacity is low, patches don't get committed, their
authors bug reviewers a few times, give up, get demotivated and leave the project
This is the major issue. We need to get review sorted out on an organizational basis before we start considering shaking anything up. At Mozilla, the way it works (in my experience) is you ask a suitable person for review, and they reliably respond to you within a few days. I'm sure that for large patchsets it's harder than for the trivial patches I submit, but the system clearly works. We need to have a pool of reviewers who are responsible for setting aside their other responsibilities to whatever extent is necessary to get new code adequately reviewed (which could just mean reverting it if it has too many problems).
I think Roan hits it on the nose. Most of the problems Ashar and Neil raise are flaws in our code review process, not flaws in the tools we use *to do* code review. I actually think that CodeReview works quite well, **for the system we currently use**. I think many of us agree that, one way or another, *that system* has major flaws.
The fact that one discussion has quickly fragmented into fresh threads on *all* of the 'big three' (code review workflow, VCS, and release cycle) illustrates how intimately connected all these things are. It makes no sense to choose a VCS which doesn't support our code review workflow; our code review is worthless if it does not support a coherent release cycle; and the release workflow (and the to-freeze-or-not-to-freeze question) has a dependency on the VCS infrastructure.
Ultimately, though, it's a mistake to think of any of these issues as technical questions: they are **social** problems. We have to choose the *mindset* which works for us as individuals, as a group and as a charitable Foundation. Currently our development mindset is of the Wild West: pretty much everyone works alone, on things which either interest them or which they are being paid to be interested in, and while everyone is responsible enough to fix their own bugs, our focus is on whatever we, individually, are doing rather than the finished product, because the product only *becomes* finished once every 6 months or so. The only reasons now that we keep trunk broadly runnable are a) it makes it easier for us to continue our own development, and b) the TWN people shout at us whenever we break it.
I'm not, let me be clear, saying that said 'Wild West' mindset is at all a bad thing, it is very open and inclusive and it keeps us from the endless trivial discussions which lead to cynicism and then flames in more close-knit communities. But as Roan says, it is *not* the only mindset, and the alternative is one which is more focussed at every stage on how changes affect a continuously-finished product. We know the regime which is at the other end of the scale: the Linux kernel's universal pre-commit review, which I'm going to suggest we call the 'Burnt Offering' approach to coding as patches are worked, reworked, and inevitably reduced in number before being presented for divine approval. That has clear advantages, in ensuring very high code quality and probably improving *everyone's* coding skills, but also the disadvantages Roan mentions.
The smoketest-trunk-every-week development model, which defies being given a crass analogy, is somewhere in the middle, and I think that's closer to where we need to be. If we made an absolute policy of scapping to the WMF cluster once a week, every week, it would force a shift in our mindset (arguably a shift *back*), but not one that's seen as an artificial limitation. No one will begrudge a release manager reverting changes on Tuesday afternoon which people agree will not be fixed in time for a Wednesday scap, while the same release manager spending Tuesday *not* merging changes for the exact same reason is seen in a much more negative light. We retain people's ability to make rapid and immediate changes to a bleeding-edge trunk, but still ensure that we do not get carried away, as we did for 1.17 and are still merrily doing for 1.18, on a tide of editing which is not particularly focussed or managed (witness the fact that out of the 15,000 revisions in 1.17, we can point out only about three 'headline' features).
There are implementation questions to follow on from whichever workflow regime we move towards: for the weekly-scap process we need to find a replacement for Brion and his cluebat which is as reliable and efficient as he was; for a Linux-style system we need to sort out how to ensure that patches get the review that they need and that it doesn't just kill our development stone dead; and even to continue in the Wild West we need to sort out how to stop tracing out the Himalayas with the graph of unreviewed commits and actually get our damn releases out to prove that the system can work. My main point is that *any* technical discussion, about SVN/Git, about CodeReview or its alternatives, even about Bugzilla/Redmine, is premature unless we have reached an adequate conclusion about the social aspects of this combined issue. Because Git does not write code, nor does CodeReview or Bugzilla. *We* write MediaWiki, and we could in principle do it in notepad or pico if we wanted (some of us probably do :-D). The most important question is what will make us, as a group, more effective at writing cool software. Answers on a postcard.
--HM
On Thu, Mar 24, 2011 at 9:27 PM, Happy-melon happy-melon@live.com wrote:
I think Roan hits it on the nose. Most of the problems Ashar and Neil raise are flaws in our code review process, not flaws in the tools we use *to do* code review. I actually think that CodeReview works quite well, **for the system we currently use**.
I think it works very poorly, independent of our code review process:
* The discussion system is buggy and unpredictably mangles markup (it probably shouldn't have attempted to support wikitext in the first place . . .)
* It doesn't allow line-by-line review of patches
* It doesn't allow sensible configuration of e-mail notification
* It doesn't integrate with version control beyond just reading it (e.g., it could support merging to a branch from within the web UI)
* It doesn't integrate with bug tracking at all
These are only the things I can think of off the top of my head without having even *used* decent code review software. I'm pretty sure that if I had used something like Gerrit a lot, I'd recognize a lot more drawbacks. The bottom line is, trying to write our own code review software is just a bad idea in the long run.
Ultimately, though, it's a mistake to think of any of these issues as technical questions: they are **social** problems. We have to choose the *mindset* which works for us as individuals, as a group and as a charitable Foundation.
There are technical problems here as well. The technical advantages of moving to a DVCS can be separated completely from the social advantages of the new code review paradigms we could adopt after doing so. Moving from SVN to git would be a step forward in a lot of ways even if we kept the code review and deployment process basically unchanged. But it's also a prerequisite for adopting certain types of code review, or at least it would make adopting those types of code review much easier, so we should really talk about switching to git before we consider a review-then-commit system.
We know the regime which is at the other end of the scale: the Linux kernel's universal pre-commit review, which I'm going to suggest we call the 'Burnt Offering' approach to coding as patches are worked, reworked, and inevitably reduced in number before being presented for divine approval. That has clear advantages, in ensuring very high code quality and probably improving *everyone's* coding skills, but also the disadvantages Roan mentions.
The real distinguishing feature of Linux development isn't pre-commit review, it's that it's pull-only and thus completely individual-oriented. Linus Torvalds personally decides what gets into the official Linux kernel, and no one else has any actual say (beyond trying to persuade him). He mostly delegates to maintainers, who in turn mostly run their parts of the kernel as fiefdoms as well. This approach is idiosyncratic, and closely related to the fact that Linux is produced by dozens of independent organizations with no central organizational oversight.
I don't think we should be seriously contemplating the Linux model for MediaWiki. The overwhelming majority of MediaWiki development is done by either Wikimedia employees, or volunteers who are closely connected to Wikimedia employees. MediaWiki isn't an individual's project, it's Wikimedia's project, so a totally decentralized version control process wouldn't match the reality of how development works.
I continue to suggest that we look at a process more like Mozilla's. Like MediaWiki, Mozilla's projects are developed under the central control of a not-for-profit organization (modulo wholly-owned for-profit subsidiaries that exist for tax reasons) committed to openness and community participation. It's much more accessible to new contributors than either MediaWiki or Linux development, and I can speak to that personally as someone who's submitted code to both MediaWiki and Mozilla. (Not to Linux, but the reputation of Linux development is consistently scary enough that I don't think I need personal experience . . .)
The smoketest-trunk-every-week development model, which defies being given a crass analogy, is somewhere in the middle, and I think that's closer to where we need to be. If we made an absolute policy of scapping to the WMF cluster once a week, every week, it would force a shift in our mindset (arguably a shift *back*), but not one that's seen as an artificial limitation. No one will begrudge a release manager reverting changes on Tuesday afternoon which people agree will not be fixed in time for a Wednesday scap, while the same release manager spending Tuesday *not* merging changes for the exact same reason is seen in a much more negative light. We retain people's ability to make rapid and immediate changes to a bleeding-edge trunk, but still ensure that we do not get carried away, as we did for 1.17 and are still merrily doing for 1.18, on a tide of editing which is not particularly focussed or managed (witness the fact that out of the 15,000 revisions in 1.17, we can point out only about three 'headline' features).
The key problem with this approach is that you retain the notion of people who have "commit access" -- who are in the privileged position of being able to submit their code without prior review. This creates an artificial barrier to new users' contributions. They have to use an entirely separate process to submit code, and it's very possible that even if we get review of in-trunk code running smoothly, patches on Bugzilla will remain neglected. After all, established contributors aren't submitting patches on Bugzilla for review, so the complaints about Bugzilla review problems will be few and quiet. Only a unified review system, where new contributors and core developers submit their code through the same process, will maximize accessibility to new contributors.
Mozilla uses Bugzilla for this, which is frankly a pain in the neck, but it works for them. Gerrit looks much better, although I have yet to see it really in action.
Of course, under this system we'd still have people with something resembling commit access. Mozilla does, unlike Linux. But commit access would only be for if you've verified something has gone through the proper review procedure, or for emergencies. So it would be a bureaucratic right that makes your life slightly easier, not something that lets you get code accepted more easily.
My main point is that *any* technical discussion, about SVN/Git, about CodeReview or its alternatives, even about Bugzilla/Redmine, is premature unless we have reached an adequate conclusion about the social aspects of this combined issue. Because Git does not write code, nor does CodeReview or Bugzilla.
No, but git is a much better tool than SVN overall. We should switch to it even if we have no intent of changing our development model. Since a lot of us do want to try out changes to our development model, that only strengthens the case for switching, since git is much more flexible and can be more easily adapted to different development models than SVN.
All this should only be after we get back to regular code deployment, though. That's certainly the most important thing. First get code review working with regular deployment, then switch to git, then switch to review-then-commit, is what I'd like to see. (Although my recent involvement with MediaWiki development has been so minimal that what I'd like to see probably doesn't matter much, to me or anyone else. I do hope to become more active again, but probably not for a couple of years at least . . .)
On Fri, Mar 25, 2011 at 2:27 AM, Happy-melon happy-melon@live.com wrote:
*We* write MediaWiki, and we could in principle do it in notepad or pico if we wanted (some of us probably do :-D).
I don't think anybody writes in Notepad, as Notepad inserts UTF byte order marks at the beginning of a text file ;) For the rest of your mail, +1
Bryan
On 03/24/2011 08:00 PM, Roan Kattouw wrote:
- We need to set a clear policy for reverting problematic revisions
(fixme's) if they aren't addressed quickly enough (again, let's say within a week). Currently we largely leave them be, but I think we should go back to something more decisive and closer to the "keep trunk runnable, or else Brion kicks your ass" paradigm and make it a bit more formal this time
This made me realize something that's only tangentially related to the existing thread, namely that we're currently using the "fixme" status in Code Review for two different kinds of commits:
1. commits that are broken and need to be fixed or reverted ASAP, and
2. commits that do more or less work, but need some followup work.
An example of the first kind of commit would be something that throws PHP fatal errors on a substantial fraction of page views. An example of the second kind might be something as minor as forgetting to update RELEASE_NOTES.
Of course, there's also a wide range of shades of gray between these two extremes, such as changes that work most of the time but break some unusual setups or use cases. Still, I do think that most "fixme" commits can be fairly cleanly assigned to one or the other of these categories, simply by asking oneself "Can I run a usable wiki with this code as it is?"
I think it might be a good idea to split these two cases into separate states. My suggestion, off the top of my head, would be to leave "fixme" for the latter and add a new "broken" status for the former.
Of course, we really shouldn't expect to have any "broken" commits in CR at any given time, since they really should be reverted and marked as such by the first person who can do so. But I think that being able to mark a commit as broken and needing a revert, even if you can't revert it yourself just then for some reason (no time, no svn access, whatever), could be a good idea. It would also make it less likely for such commits to get lost (even temporarily) among the less urgent "fixme"s.
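If I remember correctly, the set of statuses is just a configuration array in the CodeReview extension, so the code side of this could be tiny; something like the untested sketch below (the exact variable name and default list should be double-checked against the extension, and the new state would also need i18n messages and probably its own styling):

// Rough, untested sketch: add a "broken" state alongside the existing ones.
$wgCodeReviewStates = array(
	'new',
	'fixme',    // needs follow-up work, but the wiki still runs
	'broken',   // proposed: needs an urgent fix or revert
	'reverted',
	'resolved',
	'ok',
	'deferred',
	'old',
);

The harder part, of course, is agreeing on the policy around when each state applies, not the configuration itself.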
(Ps. I'd also like to note that I very much agree with what Roan wrote in general, and I'd very much like to see us going back to something like the system he proposed. +1.)
On Fri, Mar 25, 2011 at 10:30 AM, Ilmari Karonen nospam@vyznev.net wrote:
On 03/24/2011 08:00 PM, Roan Kattouw wrote:
- We need to set a clear policy for reverting problematic revisions
(fixme's) if they aren't addressed quickly enough (again, let's say within a week). Currently we largely leave them be, but I think we should go back to something more decisive and closer to the "keep trunk runnable, or else Brion kicks your ass" paradigm and make it a bit more formal this time
This made me realize something that's only tangentially related to the existing thread, namely that we're currently using the "fixme" status in Code Review for two different kinds of commits:
1. commits that are broken and need to be fixed or reverted ASAP, and
2. commits that do more or less work, but need some followup work.
An example of the first kind of commit would be something that throws PHP fatal errors on a substantial fraction of page views. An example of the second kind might be something as minor as forgetting to update RELEASE_NOTES.
Of course, there's also a wide range of shades of gray between these two extremes, such as changes that work most of the time but break some unusual setups or use cases. Still, I do think that most "fixme" commits can be fairly cleanly assigned to one or the other of these categories, simply by asking oneself "Can I run a usable wiki with this code as it is?"
I think it might be a good idea to split these two cases into separate states. My suggestion, off the top of my head, would be to leave "fixme" for the latter and add a new "broken" status for the former.
Of course, we really shouldn't expect to have any "broken" commits in CR at any given time, since they really should be reverted and marked as such by the first person who can do so. But I think that being able to mark a commit as broken and needing a revert, even if you can't revert it yourself just then for some reason (no time, no svn access, whatever), could be a good idea. It would also make it less likely for such commits to get lost (even temporarily) among the less urgent "fixme"s.
(Ps. I'd also like to note that I very much agree with what Roan wrote in general, and I'd very much like to see us going back to something like the system he proposed. +1.)
+1 to everything Roan said and +1 to everything above.
-Chad
"Chad" innocentkiller@gmail.com wrote in message news:AANLkTi=o1xrb3RnwBK3hVC5fKi2Xi9n+CS5qBh6nWtwM@mail.gmail.com...
On Fri, Mar 25, 2011 at 10:30 AM, Ilmari Karonen nospam@vyznev.net wrote:
On 03/24/2011 08:00 PM, Roan Kattouw wrote:
- We need to set a clear policy for reverting problematic revisions
(fixme's) if they aren't addressed quickly enough (again, let's say within a week). Currently we largely leave them be, but I think we should go back to something more decisive and closer to the "keep trunk runnable, or else Brion kicks your ass" paradigm and make it a bit more formal this time
I think it might be a good idea to split these two cases into separate states. My suggestion, off the top of my head, would be to leave "fixme" for the latter and add a new "broken" status for the former.
+1 to everything Roan said and +1 to everything above.
-Chad
Definite +1 to this as well. I don't think that 'broken' commits should be immediately reverted **in the workflow we currently operate**; rather, I think they should be given 12 or 24 hours (probably the latter) after notification to be fixed, with prompt reversion after that as a matter of policy. Reverting what might be a 99% successful change at the drop of a hat is the worst of all worlds. If we have a clear policy then the revert makes a subtle shift from "aggressive" to "standard procedure", and we also have an upper bound on the time trunk can remain broken.
--HM
Ilmari Karonen wrote:
This made me realize something that's only tangentially related to the existing thread, namely that we're currently using the "fixme" status in Code Review for two different kinds of commits:
- commits that are broken and need to be fixed or reverted ASAP, and
- commits that do more or less work, but need some followup work.
An example of the first kind of commit would be something that throws PHP fatal errors on a substantial fraction of page views. An example of the second kind might be something as minor as forgetting to update RELEASE_NOTES.
Of course, there's also a wide range of shades of gray between these two extremes, such as changes that work most of the time but break some unusual setups or use cases. Still, I do think that most "fixme" commits can be fairly cleanly assigned to one or the other of these categories, simply by asking oneself "Can I run a usable wiki with this code as it is?"
I think it might be a good idea to split these two cases into separate states. My suggestion, off the top of my head, would be to leave "fixme" for the latter and add a new "broken" status for the former.
+1. We should also add another state for fixmes that are not about problems in the revision itself, but a request to improve more code (e.g. you should fix the same thing, added in MW 1.4, in 10 other locations of the code, too).
"Platonides" Platonides@gmail.com wrote in message news:imm5c2$rib$1@dough.gmane.org...
Ilmari Karonen wrote:
I think it might be a good idea to split these two cases into separate states. My suggestion, off the top of my head, would be to leave "fixme" for the latter and add a new "broken" status for the former.
+1. We should also add another state for fixmes that are not about problems in the revision itself, but a request to improve more code (e.g. you should fix the same thing, added in MW 1.4, in 10 other locations of the code, too).
That sort of thing should be a tag, because it is orthogonal to (and can actually change independently of) the status of the revision itself. Making it a status would make it impossible to 'ok' the revision without losing the 'extend' information, which is exactly the opposite of what we want.
--HM
Happy-melon wrote:
"Platonides" Platonides@gmail.com wrote in message news:imm5c2$rib$1@dough.gmane.org...
Ilmari Karonen wrote:
I think it might be a good idea to split these two cases into separate states. My suggestion, off the top of my head, would be to leave "fixme" for the latter and add a new "broken" status for the former.
+1. We should also add another state for fixmes that are not about problems in the revision itself, but a request to improve more code (e.g. you should fix the same thing, added in MW 1.4, in 10 other locations of the code, too).
That sort of thing should be a tag, because it is orthogonal to (and can actually change independently of) the status of the revision itself. Making it a status would make it impossible to 'ok' the revision without losing the 'extend' information, which is exactly the opposite of what we want.
--HM
Right. We should use an improve tag and stop using fixmes for that.