[Wikitext-l] HTML security

Steve Bennett stevagewp at gmail.com
Thu Nov 22 03:34:27 UTC 2007


MediaWiki makes a general contract that it won't allow "dangerous"
HTML tags in its output. It enforces this with a final pass, run
fairly late in the process, that cleans HTML tag attributes and
escapes any tags it doesn't like, as well as unrecognised &entities;.
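
For illustration, here's a rough sketch of what such a late cleaning
pass could look like, written in Python rather than MediaWiki's
actual PHP sanitizer code (the tag whitelist and entity list are
placeholders, not MediaWiki's real ones):

    import re

    # Hypothetical whitelist; the real sanitizer allows far more tags,
    # validates attributes, and handles numeric entities too.
    ALLOWED_TAGS = {"pre", "b", "i", "code", "span"}
    KNOWN_ENTITIES = {"amp", "lt", "gt", "quot", "nbsp"}

    TAG_RE = re.compile(r"</?([a-zA-Z][a-zA-Z0-9]*)([^>]*)>")
    ENTITY_RE = re.compile(r"&([a-zA-Z][a-zA-Z0-9]*);")

    def clean_output(html_text):
        """Escape disallowed tags and unrecognised named entities."""
        def fix_tag(m):
            if m.group(1).lower() in ALLOWED_TAGS:
                return m.group(0)
            # Not on the whitelist: render it harmless but visible.
            return m.group(0).replace("<", "&lt;").replace(">", "&gt;")

        def fix_entity(m):
            if m.group(1) in KNOWN_ENTITIES:
                return m.group(0)
            # Unknown entity: escape the ampersand.
            return "&amp;" + m.group(1) + ";"

        return ENTITY_RE.sub(fix_entity, TAG_RE.sub(fix_tag, html_text))

    # clean_output("<pre>x <nasty> &entities; &amp;</pre>")
    # -> "<pre>x &lt;nasty&gt; &amp;entities; &amp;</pre>"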

The question is: should the parser attempt to do this itself, or
assume the existence of that cleanup function?

For example, in this code:

<pre>
preformatted text with <nasty><html><characters> and &entities;
</pre>

Should the parser just treat the string as valid and pass it through
literally (letting the security code go to work), or should it keep
parsing the characters, stripping them, and attempting to reproduce
all the work that is currently done?
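
To make the two options concrete, a minimal Python sketch (the helper
names here are made up for illustration, not anything from MediaWiki):

    import html

    def parse_pre_passthrough(body):
        # Option 1: emit the body untouched and rely on the later
        # security pass to escape anything dangerous.
        return "<pre>" + body + "</pre>"

    def parse_pre_escaping(body):
        # Option 2: the parser itself escapes <, > and & while it
        # still has the raw text, duplicating work the security
        # pass already does.
        return "<pre>" + html.escape(body, quote=False) + "</pre>"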

Would the developers (or users, for that matter) be likely to trust a
pure parser solution? It seems to me that it's a lot easier simply to
scan the resulting output for bad bits than it is to try to predict
and block off every possible route to producing nasty code.

On the downside, if the HTML-stripping logic isn't present in the
grammar, then it doesn't exist in any non-PHP implementations...

What do people think?

Steve


