Hello jdd,
> wget
I also tried wget, but there seem to be too many links on these wiki
pages, so it took hours to follow them all. Using some excludes did not
really reduce the number of pages downloaded.
Do you have a wget example for getting these pages that excludes all the
links to "edit" and "version" pages?
I have already thought about a PHP script that gets all existing articles
from MySQL, generates URLs for them, and outputs these, e.g., as wget
command lines that the user can execute himself. But I do not even know
where in MySQL to find these articles.
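
To make the idea a bit more concrete, here is a rough sketch of what I
have in mind. Please note that the connection details, the table name
"pages", the column "page_title" and the wiki URL are only placeholders,
since I do not know how the wiki actually stores its articles:

  <?php
  // Sketch only: the credentials, table and column names below are
  // guesses and would have to be replaced with whatever the wiki
  // really uses.
  $db = mysqli_connect('localhost', 'wikiuser', 'secret', 'wikidb');
  if (!$db) {
      die('Could not connect: ' . mysqli_connect_error() . "\n");
  }

  $result = mysqli_query($db, 'SELECT page_title FROM pages');
  if (!$result) {
      die('Query failed: ' . mysqli_error($db) . "\n");
  }

  // Print one wget command line per article; the user can paste these
  // into a shell or redirect the output into a script file.
  while ($row = mysqli_fetch_assoc($result)) {
      $title = rawurlencode($row['page_title']);
      echo "wget 'http://wiki.example.org/index.php?title=" . $title . "'\n";
  }

  mysqli_close($db);
  ?>

Since each article URL is generated directly, wget would not have to
recurse through the pages at all, so the "edit" and "version" links
should never be touched.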
Sebastian