Hello,
I just subscribed (I'm the Wikipedia user At18) to ask about the automatic HTML dump function. I see from the database page that it's "in development".
If anyone is interested, I have a rudimentary Perl script that reads the downloadable SQL dump and writes every article out as a separate file, sorted into a set of alphabetical directories. It's not very fast, but it works.
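In case it helps to see the idea, here is a stripped-down sketch of the approach (simplified from what I actually run; the real dump's INSERT statements carry more columns than the two assumed here, so the pattern is illustrative only):

    #!/usr/bin/perl
    # Sketch: scan the SQL dump for (title, text) pairs and write each
    # article to <first-letter>/<title>.txt. Assumes INSERT statements
    # whose first two fields are title and text -- the real dump format
    # differs, so treat the regex as a placeholder.
    use strict;
    use warnings;
    use File::Path qw(mkpath);

    my $dumpfile = shift or die "usage: $0 dump.sql\n";
    open my $in, '<', $dumpfile or die "cannot open $dumpfile: $!\n";

    while (my $line = <$in>) {
        next unless $line =~ /^INSERT INTO/;
        # Pull out ('title','text') tuples; a real parser must also
        # cope with escaped quotes in titles and extra columns.
        while ($line =~ /\('([^']*)','((?:[^'\\]|\\.)*)'/g) {
            my ($title, $text) = ($1, $2);
            $text =~ s/\\n/\n/g;              # undo SQL newline escaping
            $text =~ s/\\(['"\\])/$1/g;       # undo escaped quotes/backslashes
            my $dir = uc substr($title, 0, 1);   # alphabetical bucket
            $dir = '_' unless $dir =~ /^[A-Z]$/; # non-letters go to "_"
            mkpath($dir) unless -d $dir;
            (my $fname = $title) =~ s{[/\\]}{_}g; # keep path separators out
            open my $out, '>', "$dir/$fname.txt" or next;
            print $out $text;
            close $out;
        }
    }
    close $in;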
What's still missing from the script: wikimarkup -> HTML conversion, some intelligence to autodetect redirects, dealing with images, and so on. I don't know if someone is already in charge of this function. If so, I can post the script; otherwise, I can develop it further myself, given some directions.
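As an example of the redirect autodetection I have in mind: in the wiki markup a redirect page simply begins with #REDIRECT [[Target]], so a check on the start of the article text should be enough. Something like:

    # Return the redirect target, or undef if the page is a normal article.
    # MediaWiki redirects begin with "#REDIRECT [[Target]]" (case-insensitive).
    sub redirect_target {
        my ($text) = @_;
        return $1 if $text =~ /^\s*#REDIRECT\s*\[\[([^\]|#]+)/i;
        return undef;
    }

The dump script could then call this on each page and skip (or specially mark) the redirect files instead of writing them out as articles.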
Alfio