On Tue, Nov 13, 2007 at 03:12:24PM -0500, Brion Vibber wrote:
Steve Summit wrote:
Jay Ashworth wrote:
The fundamental recurring argument seems to me to be "we can't do that; we'll break too much stuff."
The last time this came up (or maybe it was five or ten times ago; I can't keep track), I think I remember Brion stating pretty emphatically that no change to the parser could be contemplated if it broke *any* stuff.
That's obviously an exaggeration... after all, our existing parser breaks some stuff itself. ;)
But if we do break additional things, we should be careful about what we break, considering why it breaks, whether it was wrong in the first place, and what the impact would be. Obviously any major parser change needs to be tested thoroughly against the existing tests and a lot of live content to see where the differences in behavior are.
Certainly.
I thought I'd suggested that in pretty nauseating detail, in... oh, one of these five threads. ;-)
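On the testing point above, for concreteness, here is a rough sketch of what such a differential check could look like: render the same corpus of live pages with both parsers and flag every page whose output differs. (Purely illustrative; render_old and render_new are hypothetical stand-ins, not actual MediaWiki entry points.)

    #!/usr/bin/env python3
    # Toy differential check: render the same wikitext with two parsers and
    # report every page whose output differs.  render_old/render_new are
    # hypothetical stand-ins for "current parser" and "candidate parser".

    import difflib
    import sys
    from pathlib import Path

    def render_old(wikitext: str) -> str:
        raise NotImplementedError("hook up the existing parser here")

    def render_new(wikitext: str) -> str:
        raise NotImplementedError("hook up the candidate parser here")

    def main(corpus_dir: str) -> int:
        differing = 0
        for page in sorted(Path(corpus_dir).glob("*.wiki")):
            text = page.read_text(encoding="utf-8")
            old_html, new_html = render_old(text), render_new(text)
            if old_html != new_html:
                differing += 1
                diff = difflib.unified_diff(
                    old_html.splitlines(), new_html.splitlines(),
                    fromfile=page.name + " (old)",
                    tofile=page.name + " (new)",
                    lineterm="")
                print("\n".join(diff))
        print("%d page(s) differ" % differing)
        return 1 if differing else 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "corpus"))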
Steve Bennett wrote:
That's why I'd like input from Brion and/or other "senior developers" on ideas like a gradual transition from one parser to another. The current grammar has a lot of fat we don't need. Let's trim it.
Not going to happen. :) A new parser will need to be a drop-in which works as correctly as possible.
While that likely would mean changing some corner-case behavior (as noted above, the existing parser doesn't always do what's desired), it would not be a different *syntax* from the human perspective.
Well, the fundamental point of these five threads was that *there is no formal specification*. That means that any replacement has to *exactly match* the behaviour of the current code.
If that's not what we want, then we need to decide on a specced behavior for wikitext, and we can't back one out of the current parser since, as you note, there are already corner cases.
(Though a computer scientist would consider it a distinct syntax, since it wouldn't be identical.)
Computer scientists? Here?
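If we do go the route of deciding on a specced behavior, one low-tech way to record the decisions is as a set of input/expected-output pairs that any candidate parser must satisfy, with the corner cases included deliberately. A toy sketch (the expected strings below are placeholders for whatever we agree on, not claims about what the current parser emits):

    # Toy illustration of "spec as executable test cases": each entry pins one
    # agreed-upon behavior, corner cases included.  The expected strings are
    # placeholders for whatever gets decided, not what MediaWiki emits today.
    SPEC_CASES = [
        ("'''bold'''", "<b>bold</b>"),             # basic bold markup
        ("''italic''", "<i>italic</i>"),           # basic italic markup
        ("'''''both'''''", "<i><b>both</b></i>"),  # nesting order must be decided
    ]

    def check(render):
        """Run a candidate renderer against the agreed-upon cases."""
        failures = 0
        for wikitext, expected in SPEC_CASES:
            got = render(wikitext)
            if got != expected:
                failures += 1
                print("FAIL %r -> %r (want %r)" % (wikitext, got, expected))
        print("%d of %d cases failed" % (failures, len(SPEC_CASES)))

The point is only that the spec ends up executable, so "exactly match" stops meaning "match whatever the current code happens to do today".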
Cheers,
-- jra