On Thu, Nov 15, 2007 at 10:39:55AM +0000, Thomas Dalton wrote:
More complicated things might be a little harder, but pretty much anything can be described unambiguously in wikisyntax. So once we determine what each weird sequence should mean, it can be rewritten more clearly (for example: don't allow any nesting at all, and treat bold italic as a separate, third format).
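To make that concrete, here's a minimal sketch (not MediaWiki's actual code) of the "three flat formats" idea: runs of 2, 3, and 5 apostrophes become three distinct tokens, with no nesting between bold and italic. The token names and the fallback for odd-length runs are made up for illustration.

```python
import re

# Hypothetical token names; '' / ''' / ''''' are three distinct, flat formats.
TOKEN_FOR_RUN = {5: "BOLD_ITALIC", 3: "BOLD", 2: "ITALIC"}

def tokenize(text):
    """Split wikitext into plain-text and formatting tokens."""
    tokens = []
    pos = 0
    for m in re.finditer(r"'{2,}", text):
        if m.start() > pos:
            tokens.append(("TEXT", text[pos:m.start()]))
        run = len(m.group())
        # Runs of 4 or of more than 5 apostrophes are exactly the weird
        # sequences the thread is about; here we just pick the longest
        # known marker as a placeholder decision.
        kind = TOKEN_FOR_RUN.get(run, "BOLD_ITALIC" if run > 5 else "BOLD")
        tokens.append((kind, m.group()))
        pos = m.end()
    if pos < len(text):
        tokens.append(("TEXT", text[pos:]))
    return tokens
```

The point is that once every apostrophe run maps to exactly one token, the grammar downstream never has to reason about nesting at all.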
Nope, other than that it doesn't require processing the text again, and is more akin to the context-free-grammar model we're theoretically aspiring to. It seems cleaner to me to define the exception to the EBNF explicitly this way, but that could be a bias not based on much real evidence.
I think the amount of processing is the same either way. With my way, we do end up with a pure EBNF parser, just with something tacked on the beginning. With your way, the EBNF part and the exception are all mixed together.
So, to be clear, you're suggesting that we subst: complicated constructs by their clear, simple equivalents, and then define the grammar based on the target of that //subst//itution?
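A rough sketch of that "subst then parse" shape, using toy rules rather than the actual MediaWiki ones: pass 1 rewrites one ambiguous construct (a 4-apostrophe run) into canonical markup, and pass 2 is a regex stand-in for the pure grammar, which then never sees the exception. The chosen rewrite (leading apostrophe becomes the `&#39;` entity) is illustrative only.

```python
import re

def substitute(text):
    """Pass 1: canonicalize weird apostrophe sequences before parsing.

    Illustrative rule: an isolated run of exactly 4 apostrophes is
    rewritten as a literal apostrophe entity followed by a bold marker,
    so the grammar proper only ever sees runs of 2, 3 or 5.
    """
    return re.sub(r"(?<!')''''(?!')", "&#39;'''", text)

def parse(text):
    """Pass 2: a toy stand-in for the pure EBNF grammar."""
    text = re.sub(r"'''''(.+?)'''''", r"<b><i>\1</i></b>", text)
    text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)
    text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)
    return text

def render(text):
    return parse(substitute(text))
```

The attraction is exactly what's described above: all the mess lives in `substitute`, and `parse` stays a clean grammar with no special cases mixed in.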
Ok, yeah, that sounds like it might be slightly more feasible.
Helps pedagogically as well, as long as users are informed that the system "slightly modified their markup to make it easier to understand."
Of course, some people might get annoyed by *that*, and they'll be power users... You really can't win, here, can you?
Cheers, -- jra