Pedro Fayolle wrote:
| I've been coding a wiki parser in JavaScript with the hope it could be
| of some use for the project (especially in giving some relief to the
| servers).
[snip]
| I think this could be useful for quick previews, avoiding extra server
| hits.
Honestly, I don't think this is a viable path.
Preview is valuable because it produces *exactly* the output that the wiki does. A JavaScript work-alike parser is unlikely to match that output even in the best case, and won't support extensions at all without invoking the PHP parser on the server anyway.
Having two parallel parsers is also bad practice, since it means extra work maintaining them both and keeping them in sync. Someday we may have a real working 'alternate' parser, but if so it's going to have to prove itself worth the effort of maintaining. As it is, we have enough trouble with sloppily written pages breaking when copied to another wiki that isn't running Tidy to clean up HTML kinks, and Tidy is just a post-processing phase, not the parser itself.
There's some experimental code in 1.5 for fetching previews via an XMLHttpRequest. This avoids the skin rendering overhead (and in many cases should avoid message cache initialization overhead) required for a full HTML page submission, while still letting the PHP parser return real rendering results. Most of the time spent rendering non-trivial pages is concentrated in a few hotspots (particularly title normalization, link checking, and link generation), and IMO optimization effort would be better spent there.
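For illustration, the client side of such a preview fetch might look roughly like this. The endpoint and parameter names below are invented for the example, not the actual experimental 1.5 interface, and IE would also need an ActiveXObject fallback, omitted for brevity:

    // Ask the server-side PHP parser to render the wikitext and hand
    // back the resulting HTML fragment, skipping full page rendering.
    function fetchPreview(wikitext, callback) {
        var req = new XMLHttpRequest();
        // Hypothetical endpoint; not the real experimental interface.
        req.open('POST', '/index.php?action=ajaxpreview', true);
        req.setRequestHeader('Content-Type',
            'application/x-www-form-urlencoded');
        req.onreadystatechange = function () {
            if (req.readyState == 4 && req.status == 200) {
                callback(req.responseText); // real parser output
            }
        };
        req.send('wikitext=' + encodeURIComponent(wikitext));
    }

    // E.g. wire it to the edit box and preview area (assuming the
    // usual wpTextbox1/wikiPreview element ids on the edit page):
    fetchPreview(document.getElementById('wpTextbox1').value,
        function (html) {
            document.getElementById('wikiPreview').innerHTML = html;
        });

The point is that the client only shuttles text back and forth; the rendering itself still happens in the one canonical PHP parser.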
This is not to say that a JavaScript wikitext parser is useless, but I don't think we would be able to use it for things like previews.
--
brion vibber (brion @ pobox.com)