Per bug 12056, *some* systems appear to have bad interactions with the new preprocessor, leaving strip markers in place of <nowiki>, <ref>, etc., on page edit.
Neither Tim nor I could reproduce it on our test systems... I tried switching the new code in briefly on the live system, but alas found that at least some portion of our production servers are showing it.
Will do some further investigation on this; might be interaction with another extension, or order of operations, or a weird config issue... Bleah!
Another note -- I found that *something* on the live site is triggering very deeply nested function calls in the preprocessor expansion, which trips the 100-stack-frame recursion bailout in Xdebug, which is running on one box for diagnostic purposes.
Stack trace: http://wikitech.leuksman.com/view/Preprocessor_stack_trace
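For reference, the bailout Brion describes is Xdebug's call-stack nesting guard. A minimal sketch of how it could be raised while keeping Xdebug enabled (the value 500 is illustrative; at the time Xdebug's default limit was 100, matching the depth in the trace):

```php
// Xdebug aborts execution once the call stack exceeds
// xdebug.max_nesting_level (then-default: 100). Raising it lets
// deeply nested preprocessor expansions complete under Xdebug.
// The chosen value here is illustrative.
ini_set( 'xdebug.max_nesting_level', 500 );
```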
-- brion vibber (brion @ wikimedia.org)
On 11/30/07, Brion Vibber brion@wikimedia.org wrote:
> Neither Tim nor I could reproduce it on our test systems... I tried switching the new code in briefly on the live system, but alas found that at least some portion of our production servers are showing it.
<snip>
> Another note -- I found that *something* on the live site is triggering
What is "the live system/site"? Are all the Wikimedia sites, including MediaWiki.org, Meta, etc., running the same version of the software?
Sorry if it's a dumb question, I'm just thinking about the eventual roll-out of a new parser. I had sort of hoped/presumed that the new parser code would be tried out on a very low profile project first, then slowly rolled out to the other ones.
Steve
> Sorry if it's a dumb question, I'm just thinking about the eventual roll-out of a new parser. I had sort of hoped/presumed that the new parser code would be tried out on a very low profile project first, then slowly rolled out to the other ones.
Both parsers could be in the code at the same time with a config option to choose between them, that way it could be turned on for only a few small projects (I strongly suggest more than one so multiple languages are involved) at first.
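A minimal sketch of what such a switch could look like in a shared settings file. All names here (`$wgUseNewPreprocessor`, the class names, the wiki database names) are hypothetical, not actual MediaWiki configuration variables:

```php
// Hypothetical feature flag: enable the new preprocessor only on
// a handful of pilot wikis, chosen to cover multiple languages.
// Variable and class names are illustrative.
$wgUseNewPreprocessor = in_array(
    $wgDBname,
    array( 'testwiki', 'smallwiki_fr', 'smallwiki_ja' )
);

// The parser would then instantiate whichever implementation
// the flag selects; both classes stay in the codebase.
$preprocessorClass = $wgUseNewPreprocessor
    ? 'Preprocessor_New'
    : 'Preprocessor_Old';
```

Keeping both implementations behind a config flag also makes an emergency revert a one-line settings change rather than a code rollback.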
On 11/29/07, Steve Bennett stevagewp@gmail.com wrote:
> What is "the live system/site"? Are all the Wikimedia sites, including MediaWiki.org, Meta, etc., running the same version of the software?
Yes, they're run from a single set of files stored on NFS (which are synchronized with the various Apaches by using the scap utility). The differences between sites, such as different databases and settings, are handled by, effectively, a single LocalSettings.php equivalent with conditionals based on the Host in the request.
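The host-based conditionals might look something like the following sketch. The structure is illustrative only; the wiki names and settings are placeholders, not Wikimedia's actual configuration:

```php
// Illustrative sketch: one shared settings file serving many
// wikis, branching on the Host header of the request.
// All names below are hypothetical.
$host = isset( $_SERVER['HTTP_HOST'] )
    ? $_SERVER['HTTP_HOST']
    : 'test.wikipedia.org';

switch ( $host ) {
    case 'en.wikipedia.org':
        $wgDBname = 'enwiki';
        $wgLanguageCode = 'en';
        break;
    case 'meta.wikimedia.org':
        $wgDBname = 'metawiki';
        $wgLanguageCode = 'en';
        break;
    default:
        $wgDBname = 'testwiki';
        $wgLanguageCode = 'en';
}
```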
The way large changes are usually handled is small-scale testing (on non-content wikis like private wikis or maybe test.wikipedia.org) followed by rolling out on all sites, and if necessary reverting that and doing more small-scale testing.

I imagine it's not considered acceptable to have code with possibly major bugs running on even small sites, and if you think it's not buggy, best to test that by as wide a deployment as possible. The problems with the new parser were detected within minutes by a flood of English-speaking users from major wikis coming into #wikimedia-tech: how long would it have taken if we only rolled it out on some of the tiny wikis?
On 11/30/07, Simetrical Simetrical+wikilist@gmail.com wrote:
> The way large changes are usually handled is small-scale testing (on non-content wikis like private wikis or maybe test.wikipedia.org) followed by rolling out on all sites, and if necessary reverting that and doing more small-scale testing. I imagine it's not considered acceptable to have code with possibly major bugs running on even small sites, and if you think it's not buggy, best to test that by as wide a
I think with the new parser, it won't be a question of buggy/not buggy, but rather, how many of the language features are supported, and how many pages are affected by the unsupported features, edge cases, etc. It will be relatively(!) easy to write a parser that handles 90% of pages correctly, but it will then take a careful roll-out process to expose all the cases that aren't handled correctly.
Anyway, it sounds like technically speaking, it's possible to do this sort of roll-out, so we can worry about how desirable it is, or what the best way to do it is, if and when there is something to actually roll out :)
(Thanks for the explanations, btw.)
Steve
wikitech-l@lists.wikimedia.org