I've reverted r17507 and r17518 after getting massive complaints immediately after they went live in today's site update.
I'd like to ask that people please try to refrain from changing core styles like this without testing it against the actual site usage; for instance the prominent main pages of English and German Wikipedia and their site-specific CSS/JS.
We've gone through this dance enough times in the last few weeks I think we should all be able to realize that it's kind of disruptive and should be avoided.
-- brion vibber (brion @ pobox.com)
On 11/12/06, Brion Vibber brion@pobox.com wrote:
> I've reverted r17507 and r17518 after getting massive complaints immediately after they went live in today's site update.
>
> I'd like to ask that people please try to refrain from changing core styles like this without testing it against the actual site usage; for instance the prominent main pages of English and German Wikipedia and their site-specific CSS/JS.
>
> We've gone through this dance enough times in the last few weeks I think we should all be able to realize that it's kind of disruptive and should be avoided.
Sorry, sorry . . . it's rather poor of me to constantly forget about custom styles when most of what I'm committing is UI stuff. Unfortunately some disruption is inevitable for this kind of stuff, which I suppose suggests it should be condensed and spaced out to the extent possible, with ample forewarning every time a batch is going to be committed. Maybe I should make a branch where this kind of potentially disruptive stuff can be committed, and then we can let stuff accumulate there for a couple of months until we announce all the changes and merge to trunk? Does that sound like a good idea?
On 11/12/06, Simetrical Simetrical+wikitech@gmail.com wrote:
> On 11/12/06, Brion Vibber brion@pobox.com wrote:
> > [...]
> Sorry, sorry . . . it's rather poor of me to constantly forget about custom styles when most of what I'm committing is UI stuff. Unfortunately some disruption is inevitable for this kind of stuff, which I suppose suggests it should be condensed and spaced out to the extent possible, with ample forewarning every time a batch is going to be committed. Maybe I should make a branch where this kind of potentially disruptive stuff can be committed, and then we can let stuff accumulate there for a couple of months until we announce all the changes and merge to trunk? Does that sound like a good idea?
Changes to the layout should be proposed before they are made. They need to be tested with current wiki dumps on the largest projects so the actual effects of the changes can be determined. Once that is done, common and skin custom stylesheets can be adapted. And only after all that's done, they should be committed.
The current way of doing things is a nightmare from a customer-service standpoint: users notice that something's wrong, some of them ask at the Village Pump or its equivalents, and very few even know what the problem is or what caused it. So some of those few end up running to IRC to find a developer who can help fix things by either reverting or explaining. All the while, discussion continues, often at a level quite beyond what one would expect given the triviality of the actual issue. Then accusations are made that somebody was arbitrarily deciding "major things" without process.
Unfortunately, I'm not making this up. This actually all happened when the category trees were added and the way category pages were displayed changed. Ridiculous? Absolutely. Yet ample reason to work so it doesn't happen again.
sebmol
Not related to this specific change, but a few weeks ago someone changed something, and now every table displayed in some namespaces on some wikis is... white.
Examples:
http://pt.wikisource.org/wiki/Especial:Newpages http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost
On 11/12/06, Sebastian Moleski sebmol@gmail.com wrote:
> [...]
Luiz Augusto wrote:
> Not related to this specific change, but a few weeks ago someone changed something, and now every table displayed in some namespaces on some wikis is... white.
That's how the style sheets have been for the last couple years, with the exception of a period of a few weeks this summer.
They were briefly changed in July, then switched back to the original in August or September following many complaints about the side-effects.
-- brion vibber (brion @ pobox.com)
On 11/12/06, Sebastian Moleski sebmol@gmail.com wrote:
> On 11/12/06, Simetrical Simetrical+wikitech@gmail.com wrote:
> > [...]
> Changes to the layout should be proposed before they are made. They need to be tested with current wiki dumps on the largest projects so the actual effects of the changes can be determined. Once that is done, common and skin custom stylesheets can be adapted. And only after all that's done, they should be committed.
>
> The current way of doing things is a nightmare from a customer-service standpoint: users notice that something's wrong, some of them ask at the Village Pump or its equivalents, and very few even know what the problem is or what caused it. So some of those few end up running to IRC to find a developer who can help fix things by either reverting or explaining. All the while, discussion continues, often at a level quite beyond what one would expect given the triviality of the actual issue. Then accusations are made that somebody was arbitrarily deciding "major things" without process.
>
> Unfortunately, I'm not making this up. This actually all happened when the category trees were added and the way category pages were displayed changed. Ridiculous? Absolutely. Yet ample reason to work so it doesn't happen again.
I don't think it's a good idea to jump through hoops when adding new features. That's just silly. Clear announcement is necessary when a change will *break preexisting functionality* such as user customizations. I don't think discussion or announcement is necessary for the implementation of http://bugzilla.wikimedia.org/show_bug.cgi?id=1578, for instance.
On 11/12/06, Luiz Augusto lugusto@gmail.com wrote:
> Not related to this specific change, but a few weeks ago someone changed something, and now every table displayed in some namespaces on some wikis is... white.
>
> Examples:
>
> http://pt.wikisource.org/wiki/Especial:Newpages http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost
They were like that forever until I bugged the devs to supply a patch, but that broke other things so I reverted it later. It's due to a flawed implementation of non-main-namespace coloring that many wikis have adopted, not a software issue: in Monobook the backgrounds of all namespaces are white.
Adding the following to MediaWiki:Monobook.css *should* fix it (filling in the correct color instead of #F8FCFF, which is the color used by enwiki):
table { background-color: #F8FCFF; }
.ns-0 * table { background-color: white; }
On 11/12/06, Simetrical Simetrical+wikitech@gmail.com wrote:
> [...]
This brings up a dev/QA related question.
Is it possible to run multiple instances of MediaWiki all talking to the same database (table sets, etc.), assuming that the DB formats haven't changed between the MediaWiki versions used in those instances?
What I'm thinking is that a beta.en.wikipedia.org instance wouldn't be so bad, if it had the same data on the back end.
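Roughly what I have in mind, as a sketch of the extra instance's LocalSettings.php -- the hostnames and credentials below are made-up placeholders, not real cluster config, though the variable names should be standard MediaWiki settings:

<?php
# Hypothetical LocalSettings.php fragment for a beta.en.wikipedia.org
# instance that shares the production wiki's database. All values here
# are placeholders for illustration.

$wgSitename = "Wikipedia (beta)";
$wgServer   = "http://beta.en.wikipedia.org";

# Point at the same database back end as the live site.
$wgDBserver   = "db-master.example.org";
$wgDBname     = "enwiki";
$wgDBuser     = "wikiuser";
$wgDBpassword = "not-the-real-password";

# Safest variant: keep the beta instance read-only, so that newer code
# can't write anything unexpected into the live tables.
$wgReadOnly = "This is a read-only beta instance for testing new code.";

The read-only line could be dropped if we trusted the new code's writes, but that seems like the risky part.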
> What I'm thinking is that a beta.en.wikipedia.org instance wouldn't be so bad, if it had the same data on the back end.
In a similar vein, would it help to have a public test wiki that always ran the absolute bleeding edge? (As opposed to test.wikipedia.org, which runs an NFS checkout before the sync and as such usually differs little from most of the Wikipedia sites.) Example hostname: http://dogfood.wikipedia.org/ . That way keen people could try new stuff out earlier, and maybe we might get some feedback sooner, before changes impact the whole userbase, thus possibly resulting in greater harmony?
All the best, Nick.
On 11/13/06, Nick Jenkins nickpj@gmail.com wrote:
> > What I'm thinking is that a beta.en.wikipedia.org instance wouldn't be so bad, if it had the same data on the back end.
>
> In a similar vein, would it help to have a public test wiki that always ran the absolute bleeding edge? (As opposed to test.wikipedia.org, which runs an NFS checkout before the sync and as such usually differs little from most of the Wikipedia sites.) Example hostname: http://dogfood.wikipedia.org/ . That way keen people could try new stuff out earlier, and maybe we might get some feedback sooner, before changes impact the whole userbase, thus possibly resulting in greater harmony?
It would definitely be really spiffy if we had a public test wiki for code, but I'm not sure running from trunk would be a good idea. It would encourage people to commit to trunk for testing, which is kind of a Bad Idea. Maybe a branch that's auto-synced with trunk every so often or something, but the logistics could be tricky (how would it resolve conflicts? would it clog up the SVN list?).
Nick Jenkins wrote:
> > What I'm thinking is that a beta.en.wikipedia.org instance wouldn't be so bad, if it had the same data on the back end.
>
> In a similar vein, would it help to have a public test wiki that always ran the absolute bleeding edge? (As opposed to test.wikipedia.org, which runs an NFS checkout before the sync and as such usually differs little from most of the Wikipedia sites.) Example hostname: http://dogfood.wikipedia.org/ . That way keen people could try new stuff out earlier, and maybe we might get some feedback sooner, before changes impact the whole userbase, thus possibly resulting in greater harmony?
There's test.leuksman.com. For security reasons, we wouldn't want such a thing on the main cluster, and after SUL (single-user login) is enabled, we won't be able to have it on a *.wikipedia.org domain either.
In any case, it wouldn't be very much earlier than test.wikipedia.org, just a few days in most cases. I don't think it's going to work to rely on random community review in that period; I think we need a testing procedure for CSS changes that allows us to actively test for this kind of problem.
I reviewed the diff before I put these changes live, and I checked test.wikipedia.org for obvious fatal errors. I would have visually checked the English Wikipedia main page too, before sync, if we had a procedure in place for that. George Herbert's suggestion to have a beta.en.wikipedia.org running on the same database would be one way to do that.
-- Tim Starling
> Is it possible to run multiple instances of MediaWiki all talking to the same database (table sets, etc.), assuming that the DB formats haven't changed between the MediaWiki versions used in those instances?
>
> What I'm thinking is that a beta.en.wikipedia.org instance wouldn't be so bad, if it had the same data on the back end.
I really like that idea. I would use such a beta instance and report problems as I find them (which is the purpose of a beta-test). I would certainly /not/ use a separate "beta-test" wiki if it isn't Wikipedia, because that's extra work and it's boring compared to Wikipedia. I'm sure at least 90% of the users feel that way.
Timwi
> I would certainly /not/ use a separate "beta-test" wiki if it isn't Wikipedia, because that's extra work and it's boring compared to Wikipedia. I'm sure at least 90% of the users feel that way.
Fair point, and I think you're right in that test wikis maybe aren't used as much as they could be, mostly because their limited content means they're just not very interesting to use.
> I really like that idea. I would use such a beta instance and report problems as I find them (which is the purpose of a beta-test).
But from what Tim was saying, I think (and someone please correct me if I'm wrong here) that the idea was to have a test wiki which would be updated as one of the last steps before rolling a software update out onto the cluster. The beta site would be updated, and then there would be a quick smoke test by the person doing the roll out ("does the front page look normal?", "do a few other pages look normal?"), and if nothing abnormal was observed, then it would be rolled out onto the cluster.
As a consequence of this, from a tester's point of view, you'd normally only have a window of a few minutes in which the software you were testing differed from the software elsewhere.
Basically this setup would be useful as a quality-control checklist item for the person doing the rollout; however, it would not be useful for beta-testers, because you're not actually beta-testing anything new (except in the brief few minutes before rollout).
So if you want to use the bleeding edge, report issues found, and act as an early-adopter who gives early-warning feedback on changes before they reach the general population, then I think this may not be what you're looking for.
Of course, I could have misunderstood what was being proposed.
All the best, Nick.
Nick Jenkins wrote:
> > I really like that idea. I would use such a beta instance and report problems as I find them (which is the purpose of a beta-test).
>
> But from what Tim was saying, I think (and someone please correct me if I'm wrong here) that the idea was to have a test wiki which would be updated as one of the last steps before rolling a software update out onto the cluster. The beta site would be updated, and then there would be a quick smoke test by the person doing the roll out ("does the front page look normal?", "do a few other pages look normal?"), and if nothing abnormal was observed, then it would be rolled out onto the cluster.
That's what test.wikipedia.org is now; but of course it doesn't contain any real content so it's boring. :)
The issue we've got here is with unexpected interactions with customized CSS and JavaScript on each of several dozen large, active wikis (and hundreds more smaller ones). That's a lot tougher to smoke-test because the custom styles and the funny layouts aren't *on* the test site.
What is suggested is having test site(s) which show the actual content from the main sites.
There's a couple possible ways to handle this:
1) Have a read-write copy that periodically repopulates its dataset from a live site. Probably pretty safe.
2) Have a read-only configuration pulling live data from the live database on the main server. Hopefully safe if no information-leakage bugs, but less to test.
3) Have a read-write configuration using live data with alternate new code. Potentially very unsafe.
For instance we could have copies of, say, English and German Wikipedia that refresh the current-version data each week.
The question then is frequency of code updates.
One reason we don't automatically update code is security and data safety: sometimes new code includes XSS or SQL injection vectors, unsafe code that could corrupt data, or simply inefficient code that could pound the databases too hard. Waiting for at least some review to take them live provides some additional safety (though certainly some such problems don't get caught during that time).
Understandably we may be a bit reluctant to relax this rule if the code's running on live data, or even on alternate data on the same machines.
If we had a long development cycle before deployment (something we've tried to do before), then a beta period with duplicate sites would make a lot of sense. I've seen other big sites like Slashdot do this; users are asked to hit the copy site running the new software for a few days and work out problems, then the test data vanishes when the real upgrade happens.
test.wikipedia.org was originally populated with a subset of English Wikipedia data to do exactly that.
We never really got the testing we needed when we tried that, though; most problem reports didn't come until after the big upgrade happened -- and then we had to spend the next month chasing down bug after bug after bug.
That sort of thing is why we abandoned that development model and moved to continuous integration, with smaller changes going live pretty quickly and being tuned.
Unfortunately these silly style-type issues are disproportionately disruptive because of their visibility.
-- brion vibber (brion @ pobox.com)
Brion Vibber wrote:
> There's a couple possible ways to handle this:
>
> 1) Have a read-write copy that periodically repopulates its dataset from a live site. Probably pretty safe.
>
> 2) Have a read-only configuration pulling live data from the live database on the main server. Hopefully safe if no information-leakage bugs, but less to test.
Both of these options have the same problem as test.wikipedia.org: you can't use them for real, therefore people won't. The only thing that would keep people actively beta-testing is letting them use the new code on the live database.
> 3) Have a read-write configuration using live data with alternate new code. Potentially very unsafe.
It would be "potentially very unsafe" only if you place completely untested code on it. I thought the idea was to place code on it that would otherwise have already gone out to the live site. Surely it's much safer to beta-test it first than to push it live immediately. In other words, I see it as an _addition_ to the development process you are already using, and it would catch problems such as the CSS/JavaScript one you mentioned before it goes live.
At one point, LiveJournal introduced a system whereby you could choose to use either the latest stable code (the default for everyone) or the beta-test code (users had to explicitly set this). They did this by setting a cookie, and the cookie would cause their internal proxies to choose the right server with the right version of the code. Do you think something like that could be implemented on Wikipedia? This has two huge benefits:

* It would work for all Wikimedia sites, not just Wikipedia, much less just the English Wikipedia; and

* People wouldn't get so upset anymore about seeing problems, because they have explicitly opted in to the beta test and (ideally) would be aware that the general public isn't seeing the problem, while at the same time being encouraged to report the problem so that it can be fixed before the general public gets to see it.
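To illustrate what I mean (the cookie name and checkout paths here are invented for the example, and in practice the switching would presumably happen in the proxies rather than in PHP):

<?php
# Sketch of cookie-based selection between two checkouts of the code.
# 'usebetacode' and the paths are made-up; on the real cluster the
# proxies would route the request based on this cookie instead.

if ( isset( $_COOKIE['usebetacode'] ) && $_COOKIE['usebetacode'] === '1' ) {
    require '/srv/mediawiki-beta/index.php';    # beta-test code
} else {
    require '/srv/mediawiki-stable/index.php';  # latest stable code
}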
Does this make sense to you? Please ask if anything about the above is unclear.
Timwi
On Wed, Nov 15, 2006 at 08:41:38PM +0000, Timwi wrote:
> At one point, LiveJournal introduced a system whereby you could choose to use either the latest stable code (the default for everyone) or the beta-test code (users had to explicitly set this). They did this by setting a cookie, and the cookie would cause their internal proxies to choose the right server with the right version of the code. Do you think something like that could be implemented on Wikipedia? This has two huge benefits:
>
> * It would work for all Wikimedia sites, not just Wikipedia, much less just the English Wikipedia; and
>
> * People wouldn't get so upset anymore about seeing problems, because they have explicitly opted in to the beta test and (ideally) would be aware that the general public isn't seeing the problem, while at the same time being encouraged to report the problem so that it can be fixed before the general public gets to see it.
>
> Does this make sense to you? Please ask if anything about the above is unclear.
Sounds perfectly sensible to me... until something creeps into the beta that trashes the databases. But as you note, the code we'd be running on the beta is the code we now just push to production, so the danger level can't be any higher.
Except that conscious beta-testers may hammer harder and therefore be more likely to break stuff, I guess...
Cheers, -- jra
On 11/15/06, Timwi timwi@gmx.net wrote:
> Brion Vibber wrote:
> > 3) Have a read-write configuration using live data with alternate new code. Potentially very unsafe.
>
> It would be "potentially very unsafe" only if you place completely untested code on it. I thought the idea was to place code on it that would otherwise have already gone out to the live site. Surely it's much safer to beta-test it first than to push it live immediately. In other words, I see it as an _addition_ to the development process you are already using, and it would catch problems such as the CSS/JavaScript one you mentioned before it goes live.
I was hoping that there was a clear enough boundary between "database interaction stuff" and "other wiki code" that we could perhaps update the other stuff without touching the (potentially harmful) database interaction stuff. But sifting around the source a bit is making me more nervous about that.
Operationally, how do you handle it if you do upgrades that require database changes? What are the existing DB upgrade and rollback procedures?
Thanks.
> What is suggested is having test site(s) which show the actual content from the main sites.
>
> There's a couple possible ways to handle this:
>
> 1) Have a read-write copy that periodically repopulates its dataset from a live site. Probably pretty safe.
>
> 2) Have a read-only configuration pulling live data from the live database on the main server. Hopefully safe if no information-leakage bugs, but less to test.
>
> 3) Have a read-write configuration using live data with alternate new code. Potentially very unsafe.
>
> For instance we could have copies of, say, English and German Wikipedia that refresh the current-version data each week.
>
> The question then is frequency of code updates.
You could have a system like the Debian folks have, with a progression from Unstable -> Testing -> Stable for any software. (Except retaining the current continuous integration approach, to prevent the huge gaps between stable releases that have occurred in Debian).
For the first line of defence, how about Option 2), with automated rollout of the latest SVN whenever there have been no commits in the last 2 hours? And *maybe* error_reporting set to E_ALL (just for this read-only test site) with errors either echoed to the browser or echoed onto #mediawiki (so that problems are easy to spot, and hopefully easy to fix, as "given enough eyeballs, all bugs are shallow").
That would have caught the original style thing; it also provides a safety valve so that anything clearly malicious or dangerous can be caught, and reverted within the 2 hours; once set up it hopefully requires minimal or no manual intervention; it's relatively safe; and by printing out warnings it makes any errors more obvious before review and scap.
Then optionally, there could be Timwi's proposed beta-user site, in read-write mode, with live data. Getting the software onto this site would require a review, just as per currently. Then once the software has been used a bit there, it could be rolled out onto the cluster. This has the benefit that any major problems impact a smaller group of people, and the people it does impact have self-selected to be beta testers. Beta sites could be restricted to say the English and German Wikipedias, to keep it manageable.
Essentially the flow of software at the moment I think looks something like this:
+-----+            +---------+              +----------+
|     |   review   | test.wp |    * copy    | cluster, |
| SVN | -- and --> | r/w but | --- from --> | r/w real |
|     |   scap     | no data |     NFS      |   data   |
+-----+            +---------+              +----------+
   ^                                             |
   |                                             /
    --- fix created <----- probs found <--------
What if it were something like this:
 Unstable               Testing/Beta                                  Stable
+-----+              +---------+            +-----------+             +----------+
|     |   * 2 hrs    | read-   |   review   | Guinea    |  % no big   | cluster, |
| SVN | -- w/ no --> | only WP | --  &  --> | Pig r/w   |-- probs  -->| r/w real |
|     |   change     | mirror  |   scap     | real data |   found     |   data   |
+-----+              +---------+            +-----------+             +----------+
   ^                                              |                        |
   |                                              V                        /
    --- fix created <-- probs found <------------------------------------
* = no or very limited manual intervention required.
% = The trick here is to find a way to get the "no big probs found" rollout step done without creating a lot of extra work, so as to make it practical. The code has already been reviewed at this point, so the only question is "have the beta testers reported any new regressions?" - if the answer is yes, then you block until the answer is no, and if the answer is no, then you roll out to the cluster. There also needs to be enough time for problems to be found (e.g. 1 or 2 days), and it has to be clear to the beta testers how to report problems (e.g. do they log bugs / mail wikitech-l / post at village pump technical / something else).
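To make the "* 2 hrs w/ no change" step concrete, something like the following could run from cron. This is only a sketch: the repository URL and local path are assumptions on my part, and it relies on parsing svn info --xml output.

<?php
# Sketch: update the read-only mirror only when trunk has been quiet
# for at least two hours. URL and path below are illustrative guesses.

$xml = simplexml_load_string( shell_exec(
    'svn info --xml http://svn.wikimedia.org/svnroot/mediawiki/trunk/phase3'
) );
$lastCommit = strtotime( (string)$xml->entry->commit->date );

if ( time() - $lastCommit >= 2 * 3600 ) {
    # Quiet period is long enough; pull the latest code into the mirror.
    passthru( 'svn update /srv/mediawiki-mirror' );
}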
> That sort of thing is why we abandoned that development model and moved to continuous integration, with smaller changes going live pretty quickly and being tuned.
Continuous integration works; no reason to stop using it. The above just lets problems be found sooner (by adding a smoke test step), and with an impact on fewer people (by adding a beta-tester step).
All the best, Nick.
On 11/13/06, Simetrical Simetrical+wikitech@gmail.com wrote:
> I don't think it's a good idea to jump through hoops when adding new features. That's just silly. Clear announcement is necessary when a
I agree that we should keep it easy to improve the UI - we all know it needs improvement in many ways.
> change will *break preexisting functionality* such as user customizations. I don't think discussion or announcement is necessary for the implementation of http://bugzilla.wikimedia.org/show_bug.cgi?id=1578, for instance.
It would be *nice* (not necessary) to have fixes even as minor as that announced at the time they're committed. Hell, we could even congratulate the developer who implements them. And if they do break something, someone will be able to do something about it.
Or is there some kind of blog or RSS feed or something we could subscribe to, to see what's happening? Hopefully a little more accessible than just a subversion log?
Steve
On 11/17/06, Steve Bennett stevagewp@gmail.com wrote:
> It would be *nice* (not necessary) to have fixes even as minor as that announced at the time they're committed. Hell, we could even congratulate the developer who implements them. And if they do break something, someone will be able to do something about it.
>
> Or is there some kind of blog or RSS feed or something we could subscribe to, to see what's happening? Hopefully a little more accessible than just a subversion log?
I just restarted the Wikipedia Signpost's Technical Report last week, so if you read that it will tell you about all the changes made over the previous week. http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost Of course, it's not aligned with code synchronizations, which happen on an ad-hoc basis, so you might get the news when the feature's already been live for a few days. For more up-to-date updates you can watchlist the user subpage where I keep my drafts of the coming week's report, at http://en.wikipedia.org/wiki/User:Simetrical/BRION.
On 11/19/06, Simetrical Simetrical+wikitech@gmail.com wrote:
> I just restarted the Wikipedia Signpost's Technical Report last week, so if you read that it will tell you about all the changes made over the previous week. http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost Of course, it's not aligned with code synchronizations, which happen on an ad-hoc basis, so you might get the news when the feature's already been live for a few days. For more up-to-date updates you can [...]
Cool, very nice - how do I RSS that?
Steve
On 11/19/06, Steve Bennett stevagewp@gmail.com wrote:
> > http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost [...]
>
> Cool, very nice - how do I RSS that?
The RSS feeds for it are usually broken. You can sign up for notifications via wiki or email as an alternative though: http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/Tools/Spamlist
Angela
Simetrical <Simetrical+wikitech@...> writes:
> Sorry, sorry . . . it's rather poor of me to constantly forget about custom styles when most of what I'm committing is UI stuff. Unfortunately some disruption is inevitable for this kind of stuff, which I suppose suggests it should be condensed and spaced out to the extent possible, with ample forewarning every time a batch is going to be committed. Maybe I should make a branch where this kind of potentially disruptive stuff can be committed, and then we can let stuff accumulate there for a couple of months until we announce all the changes and merge to trunk? Does that sound like a good idea?
If changes that can cause trouble for wikis need to be announced, please inform the internal news media.
http://meta.wikimedia.org/wiki/Internal_news_media
Wikizine especially is highly interested in carrying this type of news. But it needs to be announced in time, which means a bit more than one week in advance, depending on when the last edition was sent out. And if it is not submitted directly to Wikizine, it needs to be discovered, so it may happen that the notification goes unnoticed.
[[meta:user:Walter]]