[Mediawiki-l] parser / renderer
Rowan Collins
rowan.collins at gmail.com
Thu Dec 23 12:52:02 UTC 2004
On Wed, 22 Dec 2004 17:28:14 +0100, Baeckeroot alain
<al2.baeckeroot at laposte.net> wrote:
> We only need to read the text once, extract WIKIMARKS and put
> them in the right link table, but NOT render the HTML.
>
> The parser (should it be renamed parse&render?) is very complicated
> to understand, so I need help to find where the link stuff is done
> and skip all the HTML stuff.
Because the "parser" isn't a real structured parser, you can't really
do just parts - the links are checked in
Parser::replaceInternalLinks(), but before you get there, you've got
to have done things like removing <nowiki></nowiki> sections,
transcluding {{templates}}, and probably other things I haven't
thought of. So the fact that it generates HTML while it's doing it is
a relatively small price (and of course makes things *simpler* for 99%
of cases where the "parser" is used); the big overhead is probably
getting there in the first place...
[That said, if you can find a non-ugly way of doing it, you could
perhaps avoid some particularly expensive steps - like rendering the
HTML for image links, as long as you don't mess with the
semi-recursive part of replaceInternalLinks() that deals with links
inside captions.]
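To illustrate the ordering dependency above, here is a minimal sketch (in Python, not MediaWiki's actual PHP code) of why a naive link scan must strip <nowiki> sections before looking for [[links]]; template transclusion and the other preprocessing steps are omitted. The function name and regexes are hypothetical, not anything from the MediaWiki codebase:

```python
import re

def extract_links(wikitext):
    """Hypothetical sketch: collect [[...]] link targets, but only
    after stripping <nowiki>...</nowiki> sections, since markup
    inside them must not be treated as links."""
    # Step 1: remove nowiki sections first -- a [[link]] inside one
    # is literal text, not a link.
    stripped = re.sub(r'<nowiki>.*?</nowiki>', '', wikitext,
                      flags=re.DOTALL)
    # Step 2: capture the target of [[Target]] or [[Target|label]].
    return [m.split('|', 1)[0]
            for m in re.findall(r'\[\[(.*?)\]\]', stripped)]

text = "See [[Foo]] and [[Bar|bar]], but not <nowiki>[[Baz]]</nowiki>."
print(extract_links(text))  # ['Foo', 'Bar']
```

Scanning before stripping would wrongly report Baz as a link, which is exactly why the real code has to do the <nowiki> and {{template}} work before Parser::replaceInternalLinks() runs.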
--
Rowan Collins BSc
[IMSoP]