Steve Summit wrote:
Jay Ashworth wrote:
The fundamental recurring argument seems to me to be "we can't do that; we'll break too much stuff."
The last time this came up (or maybe it was five or ten times ago; I can't keep track) I think I remember Brion stating pretty emphatically that no change to the parser could be contemplated if it broke *any* stuff.
That's obviously an exaggeration... after all, our existing parser breaks some stuff itself. ;)
But if we do break additional things, we should be careful about what we break, considering why it breaks, whether the old behavior was wrong in the first place, and what the impact would be. Obviously any major parser change needs to be well tested against the existing tests and against a lot of live content to see where the behavior differs.
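One way to do that kind of testing is a differential harness: run the old and new parsers over the same corpus of pages and collect every page whose output differs, then triage the diffs by hand. Here is a minimal sketch of the idea; the names (`parse_old`, `parse_new`, `diff_corpus`) and the toy parsers are illustrative placeholders, not actual MediaWiki code.

```python
def parse_old(text):
    # Stand-in for the existing parser's behavior.
    return text

def parse_new(text):
    # Stand-in for a candidate replacement; here it differs only in
    # stripping trailing whitespace, a typical corner-case divergence.
    return text.rstrip()

def diff_corpus(pages, old, new):
    """Run both parsers over a corpus of {title: wikitext} pages and
    return the pages whose rendered output differs, for manual triage."""
    differences = {}
    for title, text in pages.items():
        a, b = old(text), new(text)
        if a != b:
            differences[title] = (a, b)
    return differences

pages = {"Plain": "hello", "Trailing": "hello \n"}
diffs = diff_corpus(pages, parse_old, parse_new)
# Only "Trailing" shows a behavioral difference to review.
```

The point is that each divergence becomes an explicit decision — is the new behavior a fix, an acceptable change, or a regression — rather than something discovered after deployment.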
And on one level I think I agree: a cavalier change, that might break stuff and take some time to clean up after, is a very different prospect on a project with 100 pages, or even 10,000 pages, than it is on one with 2,000,000 pages.
*nod*
Steve Bennett wrote:
That's why I'd like input from Brion and/or other "senior developers" on ideas like a gradual transition from one parser to another. The current grammar has a lot of fat we don't need. Let's trim it.
Not going to happen. :) A new parser will need to be a drop-in which works as correctly as possible.
While that likely would mean changing some corner-case behavior (as noted above, the existing parser doesn't always do what's desired), it would not be a different *syntax* from the human perspective.
(Though a computer scientist would consider it distinct as it wouldn't be identical.)
-- brion vibber (brion @ wikimedia.org)