Nicolas-
> The aim will be, ultimately, to make an offline Wikipedia that could be distributed on CD/DVD/...
This is a great idea, and several people have been working along similar lines.
What I don't get is why you want to parse the raw wikitext. That will be quite a PITA unless you also bundle PHP, texvc etc. It would be much easier to use the existing parser code to create a static HTML dump from the wikitext source, and then use something like swish-e (www.swish-e.org) to index that HTML dump. That would give you more time to focus on what actually matters, i.e. the user interface.
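Just to sketch what I mean (untested, and the paths are only placeholders, not anything we actually have set up): a minimal swish-e config for indexing such a dump could look roughly like

    # index the static HTML dump (placeholder path)
    IndexDir /var/www/wikipedia-static
    # only look at the HTML files and parse them as HTML
    IndexOnly .html .htm
    IndexContents HTML .html .htm
    # where to write the resulting index
    IndexFile ./wikipedia.index

and then you'd build the index with "swish-e -c wikipedia.conf" and query it with something like "swish-e -f ./wikipedia.index -w 'neutral point of view'". The offline reader would just wrap a nicer UI around that kind of search plus the dumped pages.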
As I recall, Tim is working on something like that. Since none of these projects is very far along yet, maybe the efforts could be merged into one?
Besides Tim, there's Magnus, who has hacked together a stand-alone webserver/wiki engine for Windows, and there's an existing alpha-quality static HTML dumper called terodump by Tero Karvinen: http://www.hut.fi/~tkarvine/tero-dump/
So how about it - Magnus, Tim, Tero, Nicolas, do you think you could work together on one solution?
Regards,
Erik