On Wed, Oct 20, 2010 at 11:56 PM, Rob Lanphier <robla@wikimedia.org> wrote:
> - Is the release cadence more important (i.e. reverting features
> if they pose a schedule risk), or is shipping a set of features more important (i.e. slipping the date if one of the predetermined features isn't ready)? For example, as pointed out in another thread + IRC, there was a suggestion for creating a branch point prior to the introduction of the Resource Loader.[1] Is our priority going to be about ensuring a fixed list of features is ready to go, or should we be ruthless about cutting features to make a date, even if there isn't much left on the feature list for that date?
IMO, the best release approach is to set a timeline for branching and then release the branch when it's done. This is basically how the Linux kernel works, for example, and how MediaWiki historically worked up to about 1.15: we'd branch every three months, give the branch a while to stabilize, then make however many RCs were needed until it was stable enough to release. This gives pretty predictable release schedules in practice (until releases fell by the wayside for us after 1.15 or so), but nothing we're forced to commit to.
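To make that concrete: the cut itself is a one-liner in SVN. A minimal sketch, assuming our usual trunk/phase3 and branches/REL1_XX layout (REL1_17 here is just a placeholder for whatever version is next):

    # Time-based cut: copy trunk to a release branch on schedule
    svn copy https://svn.wikimedia.org/svnroot/mediawiki/trunk/phase3 \
        https://svn.wikimedia.org/svnroot/mediawiki/branches/REL1_17/phase3 \
        -m "Branching REL1_17 on schedule; release whenever it stabilizes"

After the cut, only fixes get merged to the branch while trunk stays open for development, and RC tarballs get rolled from the branch whenever it looks stable.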
(Actually, Linux differs a lot, because the official repository has a brief merge window followed by a multi-month code freeze, and actual development occurs in dozens of different trees managed by different people on their own schedules. But as far as the release schedule goes, it's "branch on a consistent timeframe and then release when it's ready", with initial branching time-based but release entirely unconstrained. So in that respect it's similar to how we used to do things.)
I don't think it's a good idea to insist on an exact release date, as Ubuntu does, or even to set an exact release date at all. That could force us to release with significant regressions if they come up at the last minute, and I don't see any real benefit in exchange. Does anyone care exactly when MediaWiki is released? If so, why can't they just use RCs? An RC tarball is just as easy to unpack as the release tarball.
I also don't think it makes any sense for us to do feature-based releases. The way that would work is to decide what features you want in the release, then allocate resources to get those features done in time. But Wikimedia currently doesn't use the releases; it deploys new features continually. So development will naturally be targeted at deployment whenever each feature is ready, not at a release date. Wikimedia has no big reason to pay people to rush something to completion in time for a release it isn't going to use anyway.
Furthermore, even if Wikimedia did use releases -- IIRC, you thought that was a reasonable plan when this came up before -- I still think feature-based releases are a bad idea. They encourage you either to delay the release excessively or to ship half-baked features. If you instead say that you'll ship whatever is mature at release time, with no commitment to what makes it in, the incentive shifts toward correctness and quality. Feature-based releases really only belong in the proprietary software world, where the vendor needs a feature list to persuade people to pay for the new version.
> - Projects with generally predictable schedules also have a process
> for deciding early in the cycle what is going to be in the release. For example, in Ubuntu's most recently completed release schedule [2], they allotted a little over 23 weeks for development (a little over 5 months). The release team slated a "Feature Definition Freeze" on June 17 (week 7), with what I understand was a pretty high bar for getting new features listed after that, and a feature freeze on August 12 (week 15). Many features originally slated in the feature definition were cut. Right now, we have nothing approaching that level of formality. Should we?
IMO, no. I think it's best to just ship whatever's done when the release branch is made. A process like Ubuntu's or Mozilla's only makes sense when the organization paying for development is primarily interested in the release itself, not when it's primarily interested in its own use of the product. In the latter case, it makes much more sense to do incremental development and deployment, and to treat releases mostly as an afterthought.
Wikimedia is in an unusual position here, really. Very few sites that pay for in-house code development for their own use then make real open-source releases of it. Most either keep the code closed, throw source over the wall occasionally, or are interested mostly in getting third parties to use it. I'm not personally familiar with other open-source projects in a position similar to ours, although they exist (like StatusNet?). We have to be careful with analogies to software development that's dissimilar in purpose to ours.
> - How deep is the belief that Wikimedia production deployment must
> precede a MediaWiki tarball release? Put another way, how tightly are they coupled?
IMO, it's essential that Wikimedia get back to incrementally deploying trunk instead of a separate branch. Wikipedia is a great place to test new features, and we're in a uniquely good position to do so, since we wrote the code and can very quickly fix any reported bugs. Wikipedia users are also much more aware of MediaWiki development and much more likely to know who to report bugs to. I think any site that's in a position to use its own software (even if it's closed-source) should deploy it first internally, and if I'm not mistaken, this is actually a very common practice.
This development model also gives volunteers an immediate reward for their efforts: they can see their new code live within a few days. When a Wikipedia user reports a bug, it's very satisfying to be able to say "Fixed in rXXXXX, you should see the fix within a week." It's just not the same if the fix won't be deployed for months.