Okay, okay, guys, I give up. Either I'm misunderstanding some crucial bit or I'm misunderstanding the concept of wiki and programming as a whole. But in any case this is unimportant.
One thing I can agree with is David Gerard's latest message, though.
Signed, P. Tkachenko
2012/2/12 Mihály Héder hedermisi@gmail.com:
Hello,
Um, I see that you disagree, but I don't think the arguments you proposed actually support your position.
On 12 February 2012 12:34, Pavel Tkachenko proger.xp@gmail.com wrote:
On Wed, 8 Feb 2012 15:20:41 +0100, Mihaly Heder hedermisi@gmail.com
If wikitext is going to be replaced the new language should be designed on an abstract level first.
This is correct, but if we're talking about a universal DOM that can represent all potential syntax and leaves room for extensions (nodes of new types can be safely added in the future), then the new markup can be discussed in general terms before the DOM itself.
I don't think so. They are not talking about a DOM in general, which in itself is not even context-free. They have to design a language that can be represented in a DOM and that has a fixed set of language constructs, and is therefore context-free. Without that they cannot make a new parser work.
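To make the two positions concrete, here is a minimal sketch of what "a fixed set of node types plus an extension escape hatch" could look like. Every name here (`Node`, `Heading`, `Extension`, `render`) is purely illustrative and not taken from any actual MediaWiki proposal:

```python
# Hypothetical wiki DOM: a small fixed set of core node types, plus a
# generic Extension node so new constructs can be added later without
# changing the core grammar. Names are illustrative only.

class Node:
    def __init__(self, children=None):
        self.children = children or []

class Text(Node):
    def __init__(self, value):
        super().__init__()
        self.value = value

class Heading(Node):
    def __init__(self, level, children=None):
        super().__init__(children)
        self.level = level

class Extension(Node):
    """Escape hatch: node types added in the future carry a tag name."""
    def __init__(self, tag, children=None):
        super().__init__(children)
        self.tag = tag

def render(node):
    # Serialize the tree to a simple XML-ish form.
    if isinstance(node, Text):
        return node.value
    inner = "".join(render(c) for c in node.children)
    if isinstance(node, Heading):
        return f"<h{node.level}>{inner}</h{node.level}>"
    if isinstance(node, Extension):
        return f"<ext name='{node.tag}'>{inner}</ext>"
    return inner

doc = Node([Heading(2, [Text("Intro")]),
            Extension("ref", children=[Text("src")])])
print(render(doc))  # <h2>Intro</h2><ext name='ref'>src</ext>
```

The point of the fixed set is exactly Mihály's: a serializer and parser can only be written against an enumerable list of constructs; the `Extension` node keeps that list closed while still leaving room to grow.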
It doesn't really matter unless we start correlating the DOM and the markup; then it will be time for BNFs.
If they don't correlate the DOM with the markup, then what is the point of the DOM language? Also, in the case of the old grammar, BNF won't be the way to go: the correlation will happen in custom parser code, and this is unavoidable.
So the real question is whether a new-gen wiki syntax will be compatible with a consensual data model we might have in the future.
I don't think it's a good idea to design the wiki DOM and the new wiki syntax separately; otherwise we'll end up in the same trouble current wikitext is stuck in.
I don't think that this remark is relevant, as they are not designing a new wiki syntax. They have to keep the old one.
The real problem is whether the core devs are interested in new markup at all. I don't see anything difficult in designing a new DOM except for a few tricky places (templates, inclusions), but it should definitely not take another year to complete.
The muscle is in the parser that can completely parse the old syntax into the new DOM language. BNFs won't solve that. I don't know this team in person, so I cannot judge their capabilities. But I can tell you that here in Budapest we had a really talented MSc student working on such a thing for about a year, and we could not get even close to 100% compatibility (not even 90%...)
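For what it's worth, the compatibility figures above are usually measured by running both parsers over a page corpus and counting agreements. This is a toy sketch of that idea; `parse_old` and `parse_new` are stand-ins, not real parser implementations:

```python
# Illustrative sketch of measuring old-vs-new parser compatibility
# over a corpus of pages. The parsers here are toy stand-ins.

def compatibility(pages, parse_old, parse_new):
    """Return the fraction of pages on which both parsers agree."""
    if not pages:
        return 1.0
    matches = sum(1 for p in pages if parse_old(p) == parse_new(p))
    return matches / len(pages)

# Toy stand-ins: the "new" parser fails to handle bold markup.
parse_old = lambda s: s.replace("'''", "<b>")
parse_new = lambda s: s if "'''" in s else s.replace("'''", "<b>")

pages = ["plain text", "some '''bold''' text", "more text"]
print(compatibility(pages, parse_old, parse_new))  # 2 of 3 pages agree
```

Real measurements are much harder, of course, since "agree" itself needs normalization (whitespace, attribute order, error recovery), which is exactly where the last 10% hides.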