Hi Pavel,
Thanks for your detailed reply; I understand your position better now.
Well, as they already pointed out, throwing away the current wiki
markup would be immensely difficult because:
- the millions of pages we already have are not easy to convert in the
absence of a formalized wiki grammar
- the users who are experts in wikitext would be stranded (and some of
them, like the thread starter, are already afraid that this skill will
become obsolete because of the new editor)
From following this list, I gather how they plan to tackle this
really hard problem:
- a functional decomposition of what the current parser does into a
separate tokenizer, an AST (a.k.a. WOM, or now just DOM) builder, and
a serializer. AST building might be further decomposed into the
builder proper and error handling according to the HTML spec.
- in architectural terms, all of this will be a separate component,
unlike the old PHP parser, which is really hard to extract from the
rest of the code.
In this setup there is hope that the tokenizing task can be specified
with a set of rules, effectively creating a wikitext tokenizing
standard (already a great leap forward!). The really custom parts
(custom because wikitext still lacks a formal grammar) can then be
encapsulated in the AST-building stage.
(I hope I reconstructed this right.)
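To check my own understanding of that decomposition, here is a toy
sketch of the three stages for a tiny wikitext subset (just bold
marks). Every name here is invented for illustration; this is in no
way the actual design, only the shape of the pipeline as I understand
it, including HTML5-style error recovery for unclosed markup:

```python
import re

# Stage 1, tokenizer: a tiny wikitext subset with only bold ('''...''')
# and plain text. All names here are made up for illustration.
def tokenize(src):
    tokens = []
    for match in re.finditer(r"'''|[^']+|'", src):
        text = match.group(0)
        tokens.append(("BOLD_MARK", text) if text == "'''" else ("TEXT", text))
    return tokens

# Stage 2, AST builder: pairs up bold marks. An unclosed bold is
# recovered from (implicitly closed) rather than rejected, roughly the
# way an HTML5-style tree builder handles malformed input.
def build_ast(tokens):
    root = {"type": "doc", "children": []}
    stack = [root]
    for kind, text in tokens:
        if kind == "BOLD_MARK":
            if stack[-1]["type"] == "bold":
                stack.pop()  # close the currently open bold node
            else:
                node = {"type": "bold", "children": []}
                stack[-1]["children"].append(node)
                stack.append(node)
        else:
            stack[-1]["children"].append({"type": "text", "value": text})
    return root  # any still-open bold node is implicitly closed here

# Stage 3, serializer: turn the AST back into wikitext.
def serialize(node):
    if node["type"] == "text":
        return node["value"]
    body = "".join(serialize(child) for child in node["children"])
    return "'''%s'''" % body if node["type"] == "bold" else body

ast = build_ast(tokenize("plain '''bold''' plain"))
print(serialize(ast))  # round-trips the original wikitext
```

The point of the separation is exactly what makes the other use cases
below possible: anything that can consume the AST (an editor, DBpedia,
a book builder) never needs to touch the tokenizing rules.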
I think this is the smartest thing to do in this situation. It will
not only enable them to create an alternative visual editor, which is
the original goal; it is more far-reaching than that.
It will also enable you to create an editor which uses the syntax you
already started to envision in this thread.
It will let me do a lot of the things I dream of in our project,
Sztakipedia. DBpedia could also be more effective with this parser.
Creating books from the wiki will be much easier. People will be able
to migrate content into MediaWiki from other CMSes (and the other way,
for that matter). We could have wiki syntax warnings in the regular
wikitext interface, etc.
And most importantly, I'm certain it will enable many other things I
cannot foresee now. So I wish them the best of luck (as I'm sure they
will need it :)
---
Also, I want to answer one particular point you made:

> Knowledge
> about the article one wants to edit? Surely not.
> Devotion to make an edit? Probably.
> Markup skills? Probably.
> Tolerance for outdated interfaces? I say this, too.
> Can you repeat all of this if
> someone is reluctant to read Wikipedia
> editing/copyright guidelines? Why in your opinion editing with plain
> yet intuitive markup is different from rich editor?
Yes, I would repeat this argument about reading the guidelines too.
It has been my idée fixe (and research topic) for years that an
editor should incorporate enough intelligence to act as an agent
representing the community's interests (in this case, the quality
requirements of Wikipedians and readers). What I imagine is a system
that does not bother me until, for example, I use a cite template for
the first time; then it tries to evaluate whether I used it according
to the guidelines, and perhaps explains the guidelines to me.
It would also help me fill in infoboxes, find the right templates,
categories, and links, and warn me if I don't structure the article
clearly enough (a nice feature request from the Hungarian WMF :). I
know most of this is science fiction, but I hope we will have
something like it in the far future :)
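Just to illustrate the kind of "agent" I mean, here is a deliberately
trivial sketch. Everything in it (the template name, the required
parameters, the guideline text) is invented by me for the example; a
real system would of course draw its rules from the actual guidelines
and work on the parser's AST rather than on a regex:

```python
import re

# Hypothetical rule book: one made-up rule for one cite template.
# The parameter list and hint text are invented for illustration only.
GUIDELINE_RULES = {
    "cite web": {
        "required": {"url", "title", "accessdate"},
        "hint": "Web citations should name the source and say when "
                "you retrieved it.",
    },
}

def check_template(wikitext):
    """Return in-context hints for the first template found, if any."""
    match = re.search(r"\{\{\s*([^|}]+?)\s*(\|[^}]*)?\}\}", wikitext)
    if not match:
        return []
    name = match.group(1).lower()
    rule = GUIDELINE_RULES.get(name)
    if rule is None:
        return []  # no opinion about templates we have no rule for
    # Collect the named parameters the editor actually supplied.
    params = set()
    for part in (match.group(2) or "").lstrip("|").split("|"):
        if "=" in part:
            params.add(part.split("=", 1)[0].strip())
    missing = rule["required"] - params
    if missing:
        return ["Missing %s in {{%s}}. %s"
                % (", ".join(sorted(missing)), name, rule["hint"])]
    return []

hints = check_template("{{cite web|url=http://example.org|title=Example}}")
```

The editor would stay silent for a complete citation and only surface
the one relevant guideline fragment when something is missing, which
is the "don't bother me until it matters" behavior I have in mind.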
But my point is that the guidelines would be much easier to digest if
they were always presented in context, and only the relevant part.
Maybe I'm a utopian, but I can imagine a Wikipedia where fresh editors
just start typing their knowledge with zero training and are still
able to immediately produce valuable output, provided they have good
intentions.
Best
Mihály