What I don't get is why you want to parse the raw wikitext.
<snip>
Because who said the interface would use HTML to display things? :) The base library will have an abstract parser and an HTML parser, but you could add a PDF parser, a LaTeX parser, you name it. So I/we don't want to work with HTML, but with wikitext. You could also envision the ability to export an article of the offline encyclopedia to PDF - and again, HTML probably isn't the right tool for that...
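To make the idea concrete, here is a minimal sketch of that design: a parser turns wikitext into a simple tree, and pluggable renderers (HTML, LaTeX, ...) walk the same tree. All class and function names here are illustrative assumptions, not the actual library's API, and the "parser" only handles '''bold''' markup to keep the example short.

```python
import re

class Node:
    """One piece of parsed wikitext: either plain text or a bold span."""
    def __init__(self, kind, text):
        self.kind = kind  # "text" or "bold"
        self.text = text

def parse(wikitext):
    """Toy parser: recognizes '''bold''' spans, leaves the rest as text.

    re.split with a capture group alternates plain segments (even indices)
    and captured bold segments (odd indices).
    """
    parts = re.split(r"'''(.*?)'''", wikitext)
    return [Node("bold" if i % 2 else "text", p)
            for i, p in enumerate(parts) if p]

class HtmlRenderer:
    """Renders the tree as HTML."""
    def render(self, nodes):
        return "".join(f"<b>{n.text}</b>" if n.kind == "bold" else n.text
                       for n in nodes)

class LatexRenderer:
    """Renders the same tree as LaTeX - no HTML involved anywhere."""
    def render(self, nodes):
        return "".join(f"\\textbf{{{n.text}}}" if n.kind == "bold" else n.text
                       for n in nodes)
```

The point is that only the parser knows about wikitext, and only each renderer knows about its output format; adding a PDF backend would mean writing one more renderer class, not touching the parser.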
As I recall Tim is working on something like that. Maybe while none of these projects is very far along yet, these efforts could be merged into one?
Yes, I think that would be best. But others may have diverging opinions ^_^ Med and I posted a message on Meta:Babel (the village pump of Meta) a while ago, but no one replied, so we assumed there wasn't really any project like that. We didn't even think of asking on wikitech, shame :)
Nicolas
wikitech-l@lists.wikimedia.org