Hi all, I've published what I'm calling (for no good reason) "draft 10" here:
http://www.mediawiki.org/wiki/Markup_spec/ANTLR/draft
Mostly, I got to a certain level of feature completeness. Specifically, the four features that were previously missing (tables, magic words, categories and inline HTML) have now been implemented.
I redid the table stuff - turns out I was getting too fancy for my own good. I now do less semantic checking, and am thus much more tolerant of borderline input.
I've also cleaned it up a bit and have roughly grouped all the rules into levels, thus:
Top level, block elements:
line: (table)      => table^
    | (headerline) => headerline^
    | (listmarker) => listline^
    | (hrline)     => hrline^
    | (spaceline)  => spaceline^
    | paragraph^
    ;
Next level, inline text (generally, stuff that appears within a line and doesn't contain newlines):
inline_text
@init { text_levels++; }
    : ( ( (LEFT_BRACKET LEFT_BRACKET LEFT_BRACKET) => literal_left_bracket
        | (literal_left_bracket bracketed_url)     => literal_left_bracket
        | (image)                                  => image
        | (category)                               => category
        | (external_link)                          => external_link
        | (internal_link)                          => internal_link
        | (magic_link)                             => magic_link
        | (magic_word)                             => magic_word
        | pre_block
        | (formatted_text_elem)                    => formatted_text_elem
        )
        ((nbsp_before_punctuation) => nbsp_before_punctuation)?
        ((ws) => printing_ws)?
      )+ ;
finally { text_levels--; }
The exception there is <pre> blocks which really do contain newlines.
Next level down is formatted text, which can appear in places like link captions:
formatted_text
@init { text_levels++; }
    : ( (formatted_text_elem) => formatted_text_elem
        ((nbsp_before_punctuation) => nbsp_before_punctuation)*
        ((printing_ws) => printing_ws)?
      )+ ;
finally { text_levels--; }
formatted_text_elem
    : ( (accidental_magic_link)   => accidental_magic_link
      | (punctuation_before_nbsp) => punctuation_before_nbsp
      | (APOSTROPHES)             => bold_and_italics
      | angle_tag
      | (html_entity)             => html_entity
      | unformatted_characters
      );
And the very lowest level is unformatted characters:
unformatted_characters: ( html_dangerous | punctuation | meaningless_characters | digits );
Anyway, when I say "feature complete", most of the major features that I know of are present in some form. None of them is complete in itself (except perhaps images), but it's a start.
So, what next? Suggestions for more features to add would be handy. Also, I need to get around to making it do more than just generate an AST. Theoretically it's not too much work to take the AST and spit out some kind of XHTML.
It would also be nifty if someone could figure out a way of embedding wikitext into the grammar to mark it up somehow. Does section inclusion work yet? If so, would it be possible to insert comments somehow that would allow other pages to transclude sections? Then some of the documentation could be stored outside the grammar itself, yet shown alongside...
Steve
"Steve Bennett" stevagewp@gmail.com wrote in message news:b8ceeef70802110836h6f999caeyfd3c70dc20458699@mail.gmail.com...
It would also be nifty if someone could figure out a way of embedding wikitext into the grammar to mark it up somehow. Does section inclusion work yet? If so, would it be possible to insert comments somehow that would allow other pages to transclude sections? Then some of the documention could be stored outside the grammar itself, yet shown alongside...
Or how about embedding comments that transclude other pages?
E.g.
{{/images}}
This will include [[Markup_spec/ANTLR/draft/images]].
If that can be embedded in an ANTLR comment then the page source will be the ANTLR code, but it will be rendered with in-line documentation.
- Mark Clements (HappyDog)
On 2/12/08, Mark Clements gmane@kennel17.co.uk wrote:
Or how about embedding comments that transclude other pages?
Ah. I guess. That would leave one big monolithic page, which is what I was partially hoping to avoid.
I also don't quite get the <source> tag that David Gerard added up the top. Anyone know where this is documented/implemented?
Steve
On 2/12/08, Steve Bennett stevagewp@gmail.com wrote:
I also don't quite get the <source> tag that David Gerard added up the top. Anyone know where this is documented/implemented?
That's the syntax highlighting extension:
http://www.mediawiki.org/wiki/Extension:SyntaxHighlight_GeSHi
On 2/12/08, Stephen Bain stephen.bain@gmail.com wrote:
That's the syntax highlighting extension:
http://www.mediawiki.org/wiki/Extension:SyntaxHighlight_GeSHi
Ah ha! Thanks. And this page:
http://en.wikipedia.org/wiki/Special:Version
answers my next question, which was going to be "just how flaming many of these extensions are there, and how difficult will it be to cope with them?"
Fortunately, they all (<categorytree>, <charinsert>, <fundraising>, <fundraisinglogo>, <hiero>, <imagemap>, <inputbox>, <poem>, <pre>, <ref>, <references>, <source> and <timeline>) seem to conform to the basic XML style of tags.
I'll need to encode behaviour like: <hiero> and <source> are opaque blocks, but the contents of <ref> need to be parsed.
To continue my rambling, it looks like there are three general types of <xml> tags:
- HTML tags permitted by mediawiki (<b>, <table>...)
- Native mediawiki tags (<nowiki>, <pre>, <html>, <gallery>, any others?)
- Extensions (<ref> etc)
Should be ok to deal with.
Steve
Steve Bennett wrote:
On 2/12/08, Stephen Bain stephen.bain@gmail.com wrote:
That's the syntax highlighting extension:
http://www.mediawiki.org/wiki/Extension:SyntaxHighlight_GeSHi
Ah ha! Thanks. And this page:
http://en.wikipedia.org/wiki/Special:Version
answers my next question, which was going to be "just how flaming many of these extensions are there, and how difficult will it be to cope with them?"
Please do not even try to cover individual extensions (there were more than 700 on mediawiki.org alone, last I checked). The relevant ones come in two varieties:
1) "parser hook" extensions (aka tag hooks aka extension tags), which conform to a (fuzzy) xml syntax: <name foo="bar" bla=12 blubb>...</name>; The ... in between the tags should be completely opaque, the parser should skip everything up to the closing tag. There is no support for nesting, no expansion of templates or template parameters, nothing. Also, the the text *returned* by the extension is expected to be HTML, and should be passed through the generation stage untouched.
2) "parser functions" which conform to an extended template syntax: {{#name: param|param|param...}}; In this case, all parameters have to be fully parsed and expanded, so this needs to work: {{#foo:xx|{{#bar|{{{bla|frob}}}}}|{{something}}}}
The output of parser functions may be wikitext that has to be further processed in context (just as if it were a normal template), or it may be HTML that has to be passed through (and a few more minor options). This is determined by each extension when registering the hook.
These two types of structure should be handled by the parser, and stored as a structure for further processing by extensions (if no extension handles them, they should be re-assembled into plain text).
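For the first variety, the opaque scan amounts to something like this (a minimal Java sketch, not MediaWiki's actual code; the class name and the regex are illustrative):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagHookScanner {
    // Matches <name ...>body</name> non-greedily; the attribute syntax is
    // kept deliberately fuzzy, and nesting is (correctly) not supported.
    private static final Pattern HOOK =
        Pattern.compile("<(\\w+)([^>]*)>(.*?)</\\1>", Pattern.DOTALL);

    public static void main(String[] args) {
        String wikitext = "before <hiero>opaque: ''not'' parsed</hiero> after";
        Matcher m = HOOK.matcher(wikitext);
        while (m.find()) {
            // The body (group 3) goes to the extension completely untouched;
            // whatever the extension returns is treated as HTML and passed
            // through the generation stage.
            System.out.println("tag:  " + m.group(1));
            System.out.println("body: " + m.group(3));
        }
    }
}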
Extensions may also introduce arbitrary magic words. Such extensions are impossible to make compatible with a new ANTLR-based parser; they would have to be rewritten as plugins to such a parser. Would it be possible to allow such plugins? I'm thinking of allowing a way for extensions to redefine individual bits of the grammar.
Regards, Daniel
On 2/12/08, Daniel Kinzler daniel@brightbyte.de wrote:
- "parser hook" extensions (aka tag hooks aka extension tags), which conform to
a (fuzzy) xml syntax: <name foo="bar" bla=12 blubb>...</name>. The ... in between the tags should be completely opaque; the parser should skip everything up to the closing tag. There is no support for nesting, no expansion of templates or template parameters, nothing. Also, the text *returned* by the extension is expected to be HTML, and should be passed through the generation stage untouched.
The trouble there is that <ref>, for example, can contain wikitext which needs to be parsed, e.g.:
<ref>''The origin of species'', Darwin</ref>
So at a minimum I think we would need to distinguish those extensions whose internal text needs to be parsed?
- "parser functions" which conform to an extended template syntax:
{{#name: param|param|param...}}; In this case, all parameters have to be fully parsed and expanded, so this needs to work: {{#foo:xx|{{#bar|{{{bla|frob}}}}}|{{something}}}}
The output of parser functions may be wikitext that has to be further processed in context (just as if it were a normal template), or it may be HTML that has to be passed through (and a few more minor options). This is determined by each extension when registering the hook.
Afaik, these are converted by the preprocessor (recently rewritten by Tim), and are completely invisible to the parser?
Extensions may also introduce arbitrary magic words. Such extensions are impossible to make compatible with a new ANTLR-based parser; they would have to be rewritten as plugins to such a parser. Would it be possible to allow such plugins? I'm thinking of allowing a way for extensions to redefine individual bits of the grammar.
It depends a bit on the limits of these "arbitrary magic words". I think it's actually surprisingly feasible to allow magic words that, say, consist of strings of letters surrounded by space, or certain predefined punctuation.
At first I thought that would be a nightmare, but in practice it isn't. As the second-to-last rule before rendering a string of letters literally, I would simply add a (Java/PHP) check to see if the string matched any registered extension, and parse it as an extension magic word instead. Here's how that happens with __TOC__ etc.:
magic_word: UNDERSCORE UNDERSCORE magic_word_text UNDERSCORE UNDERSCORE -> ^(MAGIC_WORD magic_word_text);
magic_word_text: {is_magic_word()}? letters;
@members {
    ....
    boolean is_magic_word() {
        return input.LT(1).getText().equalsIgnoreCase("NOTOC")
            || input.LT(1).getText().equalsIgnoreCase("TOC")
            || input.LT(1).getText().equalsIgnoreCase("FORCETOC")
            || input.LT(1).getText().equalsIgnoreCase("NOGALLERY")
            || input.LT(1).getText().equalsIgnoreCase("NOEDITSECTION");
    }
}
It would only be a problem if the contents of the magic word interfered with the lexer - say a combination of letters and other punctuation. But if the available combinations were predefined (e.g., hyphen hyphen letters digit hyphen hyphen) then they can be dealt with, and the letters themselves defined at runtime.
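For illustration, the lookup behind that semantic predicate could be driven by a runtime registry rather than the hardcoded chain above (a Java sketch; the class and method names are made up):

import java.util.Locale;
import java.util.Set;
import java.util.TreeSet;

public class MagicWordRegistry {
    private final Set<String> words = new TreeSet<>();

    // Extensions (or a localization layer) register words at runtime.
    public void register(String word) {
        words.add(word.toUpperCase(Locale.ROOT));
    }

    // This is what the {is_magic_word()}? semantic predicate would call
    // instead of the hardcoded equalsIgnoreCase() chain.
    public boolean isMagicWord(String candidate) {
        return words.contains(candidate.toUpperCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        MagicWordRegistry registry = new MagicWordRegistry();
        for (String w : new String[] {"NOTOC", "TOC", "FORCETOC",
                                      "NOGALLERY", "NOEDITSECTION"}) {
            registry.register(w);
        }
        registry.register("FROBNICATE"); // hypothetical extension-defined word
        System.out.println(registry.isMagicWord("notoc"));      // true
        System.out.println(registry.isMagicWord("FROBNICATE")); // true
        System.out.println(registry.isMagicWord("SPAM"));       // false
    }
}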
Steve
Steve Bennett wrote:
...
The trouble there is that <ref>, for example, can contain wikitext which needs to be parsed, e.g.:
<ref>''The origin of species'', Darwin</ref>
So at a minimum I think we would need to distinguish those extensions whose internal text needs to be parsed?
No. If a tag-style extension wants to support wikitext, it has to explicitly invoke a new parser pass on the text contained between the tags. The text MUST NOT be parsed/transformed before being passed to the extension, and what the extension returns must not be parsed either (the latter is only partially true for the current parser, but I would call that a bug, not a feature - see bug 8997).
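To make that contract concrete, a toy version in Java (all names here are invented for illustration; the real hook interface is PHP, of course):

import java.util.function.Function;

public class RefExtensionSketch {
    // Stand-in for a full wikitext -> HTML parser pass.
    static String parseWikitext(String raw) {
        return raw.replaceAll("''(.*?)''", "<i>$1</i>"); // toy italics rule
    }

    // A tag hook receives the raw, untouched body...
    static String refHook(String rawBody, Function<String, String> parser) {
        // ...and must itself ask for a fresh pass if it wants wikitext support.
        return "<li class=\"reference\">" + parser.apply(rawBody) + "</li>";
    }

    public static void main(String[] args) {
        String body = "''The origin of species'', Darwin"; // arrives unparsed
        System.out.println(refHook(body, RefExtensionSketch::parseWikitext));
    }
}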
- "parser functions" which conform to an extended template syntax:
...
Afaik, these are converted by the preprocessor (recently rewritten by Tim), and are completely invisible to the parser?
I don't know. I don't see why parser functions should be handled by the preprocessor while tag hooks are not. But maybe this is so.
magic_word: UNDERSCORE UNDERSCORE magic_word_text UNDERSCORE UNDERSCORE -> ^(MAGIC_WORD magic_word_text);
...
It would only be a problem if the contents of the magic word interfered with the lexer - say a combination of letters and other punctuation. But if the available combinations were predefined (e.g., hyphen hyphen letters digit hyphen hyphen) then they can be dealt with, and the letters themselves defined at runtime.
Magic words don't have to have the form __XXX__ - they can be characterized by any regular expression. Consider how ISBN and RFC are treated - those are magic words too... Oh, and please consider that the patterns are frequently localizable (and are thus maintained in mediawiki's messages files): French, for example, allows __AUCUNETABLE__ for __NOTOC__. The same goes for #REDIRECT, btw: Dutch allows #DOORVERWIJZING, etc...
I'm not entirely sure if extensions are free to define magic words using *any* pattern, but I think this is so. MagicWord.php is entirely regex-based. Which would mean that either your parser will only support some types of magic words, or it needs a way to hook into the actual grammar.
Oh, and "variables" like {{PAGENAME}} are treated as magic words internally, though that wouldn't have to be so. I would probably use the template mechanism, and simply intercept the use of special names.
-- Daniel
On Feb 13, 2008 10:08 AM, Daniel Kinzler daniel@brightbyte.de wrote:
Oh, and "variables" like {{PAGENAME}} are treated as magic words internally, though that wouldn't have to be so. I would probably use the template mechanism, and simply intercept the use of special names.
Which reminds me: what about parser functions? Are they in "draft 10"?
Magnus
The trouble there is that <ref>, for example, can contain wikitext which needs to be parsed, e.g.:
<ref>''The origin of species'', Darwin</ref>
So at a minimum I think we would need to distinguish those extensions whose internal text needs to be parsed?
No. If a tag-style extension wants to support wikitext, it has to explicitly invoke a new parser pass on the text contained between the tags. The text MUST NOT be parsed/transformed before being passed to the extension, and what the extension returns must not be parsed either (the latter is only partially true for the current parser, but I would call that a bug, not a feature - see bug 8997).
A <ref> essentially changes the output destination of the parser.
If you're building an XHTML DOM document, the ref handler just needs to switch the output destination to an <li> of a references list, and let the parser continue. </ref> resets it back to wherever it was.
And when it sees a <references/> tag, the list is inserted into the main document.
That's how I've implemented it anyway.
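In stripped-down form it's something like this (a Java sketch with a stack of string buffers standing in for the real DOM; all names are illustrative):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class RefDestinationSketch {
    private final StringBuilder document = new StringBuilder();
    private final List<String> references = new ArrayList<>();
    private final Deque<StringBuilder> destinations = new ArrayDeque<>();

    RefDestinationSketch() {
        destinations.push(document); // default destination: main document
    }

    void emit(String text) {
        destinations.peek().append(text);
    }

    void openRef() {
        // <ref> redirects all parser output into a fresh buffer...
        destinations.push(new StringBuilder());
    }

    void closeRef() {
        // ...and </ref> restores whatever destination was active before,
        // leaving a marker in the main text.
        references.add(destinations.pop().toString());
        emit("[" + references.size() + "]");
    }

    void emitReferencesList() {
        // <references/> inserts the collected items into the main document.
        for (String ref : references) {
            emit("\n<li>" + ref + "</li>");
        }
    }

    public static void main(String[] args) {
        RefDestinationSketch out = new RefDestinationSketch();
        out.emit("Some claim");
        out.openRef();
        out.emit("<i>The origin of species</i>, Darwin");
        out.closeRef();
        out.emitReferencesList();
        System.out.println(out.document);
    }
}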
Jared
A <ref> essentially changes the output destination of the parser.
If you're building an XHTML DOM document, the ref handler just needs to switch the output destination to an <li> of a references list, and let the parser continue. </ref> resets it back to wherever it was.
And when it sees a <references/> tag, the list is inserted into the main document.
That's how I've implemented it anyway.
Jared
This is how you can implement the extension's functionality. But that is not the goal. The goal is to provide an interface for the existing implementation of the extension to be plugged into.
That is, the grammar should NOT know about <ref>: not what it does, not even that it exists. It should simply have a facility that allows external (PHP) code to handle the characters (unchanged!) between (some specific) tags.
-- Daniel
-----Original Message-----
From: wikitext-l-bounces@lists.wikimedia.org [mailto:wikitext-l-bounces@lists.wikimedia.org] On Behalf Of Daniel Kinzler
Sent: 13 February 2008 14:57
To: Wikitext-l
Subject: Re: [Wikitext-l] Draft 10 published
A <ref> essentially changes the output destination of the parser.
If you're building an XHTML DOM document, the ref handler just needs to switch the output destination to an <li> of a references list, and let the parser continue. </ref> resets it back to wherever it was.
And when it sees a <references/> tag, the list is inserted into the main document.
That's how I've implemented it anyway.
Jared
This is how you can implement the extension's functionality. But that is not the goal. The goal is to provide an interface for the existing implementation of the extension to be plugged into.
You're not going to get 100% compatibility moving from the multiple-search/replace method to a single parse.
Hooks embedded within the parser, like InternalParseBeforeLinks and ParserBeforeTidy, become impossible to do.
That is, the grammar should NOT know about <ref>: not what it does, not even that it exists. It should simply have a facility that allows external (PHP) code to handle the characters (unchanged!) between (some specific) tags.
Agreed, the grammar should know how to parse and correct tag-soup-style HTML/XML that it gets handed to deal with.
Jared
You're not going to get 100% compatibility moving from the multiple-search/replace method to a single parse.
Hooks embedded within the parser, like InternalParseBeforeLinks and ParserBeforeTidy, become impossible to do.
True. I was thinking of "clean" tag hooks and parser functions. These should continue to work with a minimum of modification. I don't mind the black magic breaking.
That is, the grammar should NOT know about <ref>: not what it does, not even that it exists. It should simply have a facility that allows external (PHP) code to handle the characters (unchanged!) between (some specific) tags.
Agreed, the grammar should know how to parse and correct tag-soup-style HTML/XML that it gets handed to deal with.
Yes, though for the parser, there are three cases to consider for HTML/XML style tags:
1) (whitelisted) HTML tags, which can occur "soupy", and are more or less passed through (or "tidied" into valid xhtml).
2) Other tags (potentially handled by an extension) which must match in pairs exactly and cause the parser to take anything *in between* LITERALLY, and pass it to the extension for processing.
3) In case there is no such extension, it needs to go back, read the *tags* literally, and then parse the text between the tags.
There's even a fourth case, namely magic tags like <nowiki> that have to be known to the parser for special handling - these may also include <includeonly>, <onlyinclude> and <noinclude>, though those might be handled by the preprocessor; I'm not sure about that.
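Taken together, the dispatch over those four cases boils down to something like this (a Java sketch; the whitelist and extension registry contents are only examples, and in reality the registry would be filled at runtime):

import java.util.Set;

public class TagDispatchSketch {
    enum TagKind { HTML_SOUP, EXTENSION_OPAQUE, MAGIC, UNKNOWN_LITERAL }

    // Illustrative contents only.
    static final Set<String> HTML_WHITELIST = Set.of("b", "i", "table", "tr", "td");
    static final Set<String> MAGIC_TAGS =
        Set.of("nowiki", "includeonly", "onlyinclude", "noinclude");
    static final Set<String> EXTENSIONS = Set.of("ref", "hiero", "source");

    static TagKind classify(String tagName) {
        String name = tagName.toLowerCase();
        if (MAGIC_TAGS.contains(name))     return TagKind.MAGIC;            // case 4
        if (HTML_WHITELIST.contains(name)) return TagKind.HTML_SOUP;        // case 1
        if (EXTENSIONS.contains(name))     return TagKind.EXTENSION_OPAQUE; // case 2
        return TagKind.UNKNOWN_LITERAL;                                     // case 3
    }

    public static void main(String[] args) {
        for (String tag : new String[] {"b", "ref", "foo", "nowiki"}) {
            System.out.println(tag + " -> " + classify(tag));
        }
    }
}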
In the case of (some!) parser functions, it has to be considered that the *output* of the extension would have to be parsed too, inline. But that stuff is probably handled by the preprocessor - if that is indeed the case, there's nothing to worry about.
-- Daniel
On 2/14/08, Daniel Kinzler daniel@brightbyte.de wrote:
Yes, though for the parser, there are three cases to consider for HTML/XML style tags:
1) (whitelisted) HTML tags, which can occur "soupy", and are more or less passed through (or "tidied" into valid xhtml).
2) Other tags (potentially handled by an extension) which must match in pairs exactly and cause the parser to take anything *in between* LITERALLY, and pass it to the extension for processing.
3) In case there is no such extension, it needs to go back, read the *tags* literally, and then parse the text between the tags.
There's even a fourth case, namely magic tags like <nowiki> that have to be known to the parser for special handling - these may also include <includeonly>, <onlyinclude> and <noinclude>, though those might be handled by the preprocessor; I'm not sure about that.
My grammar almost does all this - I just need to make extensions opaque, which is easy. Except 3) is really the default anyway; there is no "going back" as such.
I'm not dealing with <includeonly> etc. yet - I'm assuming they're handled by the preprocessor. Am I wrong?
Steve
-----Original Message-----
From: wikitext-l-bounces@lists.wikimedia.org [mailto:wikitext-l-bounces@lists.wikimedia.org] On Behalf Of Daniel Kinzler
Sent: 13 February 2008 22:30
To: Wikitext-l
Subject: Re: [Wikitext-l] Draft 10 published
You're not going to get 100% compatibility moving from the multiple-search/replace method to a single parse.
Hooks embedded within the parser, like InternalParseBeforeLinks and ParserBeforeTidy, become impossible to do.
True. I was thinking of "clean" tag hooks and parser functions. These should continue to work with a minimum of modification. I don't mind the black magic breaking.
That is, the grammar should NOT know about <ref>: not what it does, not even that it exists. It should simply have a facility that allows external (PHP) code to handle the characters (unchanged!) between (some specific) tags.
Agreed, the grammar should know how to parse and correct tag-soup-style HTML/XML that it gets handed to deal with.
Yes, though for the parser, there are three cases to consider for HTML/XML style tags:
1) (whitelisted) HTML tags, which can occur "soupy", and are more or less passed through (or "tidied" into valid xhtml).
2) Other tags (potentially handled by an extension) which must match in pairs exactly and cause the parser to take anything *in between* LITERALLY, and pass it to the extension for processing.
3) In case there is no such extension, it needs to go back, read the *tags* literally, and then parse the text between the tags.
All tag attributes are parsed by Sanitizer::decodeTagAttributes(), I believe, so things like attributes with missing values (<foo bar>) are possible for all tags.
On 2), I'm not sure they must always be matched in pairs. I think somewhere (possibly in Parser::extractTagsAndParams()) allows unterminated tags to run to the end of the text.
On 3), unrecognised tags should just cause the parser to output a < and carry on parsing.
There's even a fourth case, namely magic tags like <nowiki> that have to be known to the parser for special handling - these may also include <includeonly>, <onlyinclude> and <noinclude>, though those might be handled by the preprocessor, i'm not sure about that.
I believe (I haven't looked into it or implemented it yet) that onlyinclude and noinclude are essentially filters that occur at transclusion time. Includeonly is a filter at save time? Preventing a template from being associated with a category, for example.
In the case of (some!) parser functions, it has to be considered that the *output* of the extension would have to be parsed too, inline. But that stuff is probably handled by the preprocessor - if that is indeed the case, there's nothing to worry about.
-- Daniel
On 2/15/08, Jared Williams jared.williams1@ntlworld.com wrote:
On 3), unrecognised tags should just cause the parser to output a < and carry on parsing.
That's what it does at the moment.*
The ANTLR code looks like this:
ANGLE_TAG: { !in_noparse }? =>
    ( (ANGLE_TAG_OPEN_ACTUAL)  => ANGLE_TAG_OPEN_ACTUAL
    | (ANGLE_TAG_CLOSE_ACTUAL) => ANGLE_TAG_CLOSE_ACTUAL
    | '<' { $type = LT; }
    );
Basically, if it can't match a whole tag, then it just calls the token an LT (less than) and moves on.
Steve
*Well, apart from a pesky lexing problem I thought I had already solved, where <nob generates an error.
Basically, if it can't match a whole tag, then it just calls the token an LT (less than) and moves on.
If I understand correctly, that's for an *incomplete* tag, i.e. a lone "<". We were talking about an *unknown* tag or pair of tags, such as "<foo>bla</foo>", which is neither HTML, nor magic, nor handled by an extension.
-- Daniel
On 2), I'm not sure they must always be matched in pairs. I think somewhere (possibly in Parser::extractTagsAndParams()) allows unterminated tags to run to the end of the text.
Might be, though it's debatable if that's a bug or a feature. Anyway, content-less tags (like <foo/>) are possible too.
On 3), unrecognised tags should just cause the parser to output a < and carry on parsing.
Basically, yes, if the parser itself has that knowledge. Otherwise this would have to be done in some kind of post-processing step, after looking at which tags can be dealt with by extensions. That's what I meant by "going back".
-- Daniel
On 2/14/08, Daniel Kinzler daniel@brightbyte.de wrote:
That is, the grammar should NOT know about <ref>: not what it does, not even that it exists. It should simply have a facility that allows external (PHP) code to handle the characters (unchanged!) between (some specific) tags.
Right. The grammar will recognise the concept of an xml-style tag, then use a lookup at runtime to decide whether that particular tag is to be treated literally or fed to an extension. That's easy to do in the ANTLR framework. That assumes, of course, that the extension can work purely on the basis of the input it's fed, and doesn't need access to the rest of the document in whatever state...
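Roughly, in Java (hypothetical names throughout; the actual lookup would sit in host code behind a grammar action):

import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class TagLookupSketch {
    private final Map<String, UnaryOperator<String>> hooks = new HashMap<>();

    void register(String tag, UnaryOperator<String> hook) {
        hooks.put(tag, hook);
    }

    // Called from a grammar action once a complete <name>...</name> matched.
    String handleTag(String name, String rawBody) {
        UnaryOperator<String> hook = hooks.get(name);
        if (hook == null) {
            // No extension registered: re-emit the tags literally; the body
            // would then be parsed as ordinary wikitext.
            return "<" + name + ">" + rawBody + "</" + name + ">";
        }
        // Extension registered: hand it the raw body, untouched.
        return hook.apply(rawBody);
    }

    public static void main(String[] args) {
        TagLookupSketch parser = new TagLookupSketch();
        parser.register("hiero", body -> "<img alt=\"" + body + "\"/>");
        System.out.println(parser.handleTag("hiero", "A1-B2"));
        System.out.println(parser.handleTag("foo", "bla")); // unknown: literal
    }
}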
Steve
Daniel Kinzler wrote:
Steve Bennett wrote:
magic_word: UNDERSCORE UNDERSCORE magic_word_text UNDERSCORE UNDERSCORE -> ^(MAGIC_WORD magic_word_text);
...
It would only be a problem if the contents of the magic word interfered with the lexer - say a combination of letters and other punctuation. But if the available combinations were predefined (e.g., hyphen hyphen letters digit hyphen hyphen) then they can be dealt with, and the letters themselves defined at runtime.
Magic words don't have to have the form __XXX__ - they can be characterized by any regular expression. Consider how ISBN and RFC are treated - those are magic words too... Oh, and please consider that the patterns are frequently localizable (and are thus maintained in mediawiki's messages files): French, for example, allows __AUCUNETABLE__ for __NOTOC__. The same goes for #REDIRECT, btw: Dutch allows #DOORVERWIJZING, etc...
I'm not entirely sure if extensions are free to define magic words using *any* pattern, but I think this is so. MagicWord.php is entirely regex-based. Which would mean that either your parser will only support some types of magic words, or it needs a way to hook into the actual grammar.
I think they more or less can. But that could be restricted. The few people using magic words will have replicated the standard __XXX__ format, so if you're using a magic word not in that form, you're out of luck.
On 2/13/08, Daniel Kinzler daniel@brightbyte.de wrote:
No. If a tag-style extension wants to support wiki text, it has to explicitly invoke a new parser pass on the text contained between the tags. The text MUST NOT be parsed/transformed before being passed to the extension, and what the extension returns must not be parsed either (the latter is only partially true for the current parser, but i would call that a bug, not a feature - see bug 8997).
So, the parse sequence for:
* <ref> '''blah'''</ref>
basically goes:
1. Parse bullet and find <ref>...</ref>
2. Pass <ref> chunk to extension.
3. Extension processes <ref> chunk, calls parser to process the bold tags, returns something with <b>blah</b>
4. Parser continues on...
Magic words don't have to have the form __XXX__ - they can be characterized by any regular expression. Consider how ISBN and RFC are treated - those are magic words too... Oh and please consider that the patterns are frequently localizable
No they're not. Quite specifically, they're not - the key words (ISBN, RFC, PMID) are hardcoded into the parser code and not internationalisable. I call them "magic links" in my grammar.
(and are thus maintained in mediawiki's messages files): French, for example, allows __AUCUNETABLE__ for __NOTOC__. The same goes for #REDIRECT, btw: Dutch allows #DOORVERWIJZING, etc...
That's ok - I'd forgotten that the #REDIRECT word is a magic word though.
I'm not entirely sure if extensions are free to define magic words using *any* pattern, but I think this is so. MagicWord.php is entirely regex-based. Which would mean that either your parser will only support some types of magic words, or it needs a way to hook into the actual grammar.
Yes, as I discussed, there will need to be restrictions on the form of magic words, which is not a bad thing anyway.
Oh, and "variables" like {{PAGENAME}} are treated as magic words internally, though that wouldn't have to be so. I would probably use the template mechanism, and simply intercept the use of special names.
I'm a bit unclear on the meaning and current processing of the things involving curly braces. Can someone help me out here:
* {{template}} - totally handled by preprocessor?
* {{{1}}} - template parameter, totally handled by preprocessor?
* {{PAGENAME}} - "magic" variable? Where is it handled? Does it have to be caps?
* {{foo:blah}} - parser function? Where is it handled?
* {{defaultsort:blah}} - same question
Any others?
Currently I'm handling these:
* __TOC__ etc (magic words)
* #REDIRECT
* ISBN, PMID, RFC (magic links)
Steve
On 12/02/2008, Steve Bennett stevagewp@gmail.com wrote:
I also don't quite get the <source> tag that David Gerard added up the top. Anyone know where this is documented/implemented?
I put that in as the only way I could find to get the ANTLR grammar to render properly. <pre> and <nowiki> don't seem to have been sufficient.
- d.