Some neat features could perhaps be implemented outside the wiki.cgi framework using a cron job. For example, a script could go through all the pages of the site looking for the "most requested pages", i.e., the terms that are wiki-linked most often but for which no page yet exists.
This would be useful because it would tell us about major "holes" in our coverage.
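Here is a rough sketch of the sort of thing I have in mind. It assumes each page is stored as one plain-text file under a single directory, that links look like [[Some Page]], and that titles use underscores for spaces; those are all guesses on my part, and the real wiki.cgi page store and link syntax may well differ:

    #!/usr/bin/perl -w
    # wanted.pl -- untested sketch; the path, link syntax, and title
    # convention below are guesses, not how wiki.cgi necessarily works.
    use strict;

    my $page_dir = "/home/wiki/pages";   # wherever the page files live

    opendir(my $dh, $page_dir) or die "can't open $page_dir: $!";
    my @pages = grep { -f "$page_dir/$_" } readdir($dh);
    closedir($dh);

    my %exists = map { $_ => 1 } @pages;
    my %wanted;                          # missing target => links pointing at it

    foreach my $page (@pages) {
        open(my $fh, "<", "$page_dir/$page") or next;
        while (my $line = <$fh>) {
            while ($line =~ /\[\[([^\]|]+)/g) {   # grab each [[Link]] target
                (my $target = $1) =~ s/ /_/g;     # guess: titles use underscores
                $wanted{$target}++ unless $exists{$target};
            }
        }
        close($fh);
    }

    # Print the most-requested missing pages, most-linked first.
    foreach my $t (sort { $wanted{$b} <=> $wanted{$a} } keys %wanted) {
        print "$wanted{$t}\t$t\n";
    }

Run from cron, the output could be mailed somewhere or dumped onto a page of its own.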
Other similar "counting" statistics could be handled by the same script, such as "How many articles are more than 100 words long?" and "Which existing pages are linked to the most?"
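Those counts could ride along in the same loop; here is a companion sketch under the same (guessed) file-per-page layout:

    #!/usr/bin/perl -w
    # stats.pl -- same untested assumptions as the sketch above.
    use strict;

    my $page_dir = "/home/wiki/pages";   # guess, as before

    opendir(my $dh, $page_dir) or die "can't open $page_dir: $!";
    my @pages = grep { -f "$page_dir/$_" } readdir($dh);
    closedir($dh);
    my %exists = map { $_ => 1 } @pages;

    my $long_pages = 0;
    my %links;                           # existing page => incoming links

    foreach my $page (@pages) {
        open(my $fh, "<", "$page_dir/$page") or next;
        my $text = do { local $/; <$fh> };   # slurp the whole page
        close($fh);

        my @words = split ' ', $text;        # crude word count, markup and all
        $long_pages++ if @words > 100;

        while ($text =~ /\[\[([^\]|]+)/g) {
            (my $target = $1) =~ s/ /_/g;
            $links{$target}++ if $exists{$target};   # only pages that exist
        }
    }

    print "Articles over 100 words: $long_pages\n";
    foreach my $t (sort { $links{$b} <=> $links{$a} } keys %links) {
        print "$links{$t}\t$t\n";            # most-linked existing pages
    }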
I have been trying to write a couple of little command-line Perl scripts along these lines, but I am getting very lost. :-)