Pedro Fayolle wrote:
I think it's better demonstrated than explained, so I'll do my best to put a work-in-progress version up for a few hours later today.
Basically, it's a wikitext-to-XML recursive-descent (almost proper) parser and an XML-to-XHTML converter. From the XML I'm generating a DOM identical to the usual MediaWiki one and using the existing stylesheets, so it mostly looks the same as the PHP interface.
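To give a feel for the approach, here is a minimal sketch of a recursive-descent parser for a tiny wikitext subset ('''bold''' and ''italic'') plus a converter to XHTML. The function names and the intermediate tree shape are illustrative stand-ins, not the actual code:

```javascript
// Recursive-descent parse of a tiny wikitext subset into a simple node tree.
// Hypothetical sketch; the real parser covers far more of the wikitext grammar.
function parseWikitext(src) {
  let pos = 0;
  function parseNodes(closer) {
    const nodes = [];
    while (pos < src.length) {
      if (closer && src.startsWith(closer, pos)) break;
      if (src.startsWith("'''", pos)) {
        pos += 3;
        nodes.push({ tag: 'bold', children: parseNodes("'''") });
        pos += 3; // consume closing '''
      } else if (src.startsWith("''", pos)) {
        pos += 2;
        nodes.push({ tag: 'italic', children: parseNodes("''") });
        pos += 2; // consume closing ''
      } else {
        // Gather plain text up to the next apostrophe.
        let text = '';
        while (pos < src.length && src[pos] !== "'") text += src[pos++];
        if (text === '') text = src[pos++]; // lone apostrophe, take it as text
        nodes.push({ tag: 'text', text });
      }
    }
    return nodes;
  }
  return parseNodes(null);
}

// Convert the intermediate tree to XHTML.
const XHTML_TAGS = { bold: 'b', italic: 'i' };
function toXhtml(nodes) {
  return nodes.map(n =>
    n.tag === 'text'
      ? n.text
      : `<${XHTML_TAGS[n.tag]}>${toXhtml(n.children)}</${XHTML_TAGS[n.tag]}>`
  ).join('');
}
```

In the real thing the intermediate form is XML rather than a plain object tree, so the same document can feed both the DOM builder and any other consumer.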
It doesn't just use XMLHTTP: each page has its own URL, so the address bar changes and everything is bookmarkable. But the browser only receives a stub and builds the page itself. The page is built bit by bit, so the user can start reading the first part while the rest is being built.
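The bit-by-bit building could look something like this sketch: split the wikitext into blocks and hand each rendered block to a sink as soon as it's ready. `renderBlock` and the blank-line split are illustrative stand-ins; in the browser, the sink would append nodes to the live DOM so earlier sections appear while later ones are still being built:

```javascript
// Stand-in for the real wikitext -> XHTML pipeline.
function renderBlock(block) {
  return '<p>' + block + '</p>';
}

// Emit each rendered block immediately rather than waiting for the whole page.
// In a browser, appendToPage would insert into the DOM (and yielding to the
// event loop between blocks, e.g. via setTimeout, would keep the UI responsive).
function buildPageIncrementally(wikitext, appendToPage) {
  for (const block of wikitext.split(/\n\s*\n/)) {
    if (block.trim() !== '') appendToPage(renderBlock(block.trim()));
  }
}
```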
Editing has real-time previews, although I'm still ironing out a few bugs there. Previews are done entirely client-side, without any HTTP requests.
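Since the parser already runs in the browser, a no-HTTP preview just re-renders the textarea contents locally on each edit. A hypothetical sketch (with `renderWikitext` standing in for the client-side parser/converter), skipping the re-render when nothing actually changed:

```javascript
// Illustrative stand-in for the client-side wikitext renderer.
function renderWikitext(src) {
  return '<p>' + src.replace(/'''(.*?)'''/g, '<b>$1</b>') + '</p>';
}

// Returns an edit handler: re-renders locally and pushes the result to the
// preview pane, but skips the work when the source text hasn't changed.
function makeLivePreview(updatePreviewPane) {
  let lastSrc = null;
  return function onEdit(src) {
    if (src === lastSrc) return false; // unchanged, skip re-render
    lastSrc = src;
    updatePreviewPane(renderWikitext(src));
    return true;
  };
}
```

In the browser this handler would be attached to the textarea's keyup/input events; no round trip to the server is involved.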
So it's just like what I was planning to do, only done right :o)
I can't wait to see it working. BTW, can it be "plugged" into any running MediaWiki, like, say, Wikipedia? Or does it need its own MediaWiki setup?
Sort of. It runs on top of a MediaWiki installation, but because of the XMLHTTP security model (the same-origin policy) it has to be served from the same domain as the wiki, so I can't just put it up on my box as a gateway to Wikipedia. I wouldn't let it make edits to Wikipedia until it has proved stable anyway, but it would have been nice if I could have done a read-only gateway.
At this point I'm not aiming to roll it out on a major wiki; there's too much that you can't do from the client side (most of the special pages, for a start). That situation might change, but it would require quite a lot of work to make the server return special pages as unpresented data. The aim is to demonstrate serving dynamic services from very low-spec web servers, because almost everything served is static.
Wishing you the best of luck with this; it sounds truly amazing.
Pedro