This depends on your server configuration. Note that robots.txt matches on URL prefixes, so you need a reliable way of distinguishing plain view hits from other URLs.
I'd just make the example work if the wiki is in the root folder for the site. That'd be enough to give most people a starting place if nothing else. And I think you can distinguish the ones that need to be ignored by the '?' in the URL. Are there any pages that should be ignored that don't have the '?'?
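For example, assuming the wiki is installed at the site root and every non-view action (edit, history, search) carries a '?' in the URL, something like this might work. Note that the original robots.txt standard only does plain prefix matching; the '*' wildcard and '$' anchor are extensions honored by the major crawlers (Googlebot, Bingbot) but not guaranteed elsewhere:

```
User-agent: *
# Block any URL containing a query string (edit, history, diff, search, etc.).
# The '*' wildcard is a crawler extension, not part of the original standard.
Disallow: /*?
```

If your wiki runs through a single script (say, `/cgi-bin/wiki.pl` — a hypothetical path) and plain views use a separate rewritten path, a plain prefix rule like `Disallow: /cgi-bin/wiki.pl` would work even with crawlers that don't support wildcards.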
(The meta tags already tell search engines not to index edit pages and other special pages, and not to continue spidering from them, but won't prevent the initial hit to load that page.)
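For reference, the tags in question are emitted in the page head of edit and other special pages and look something like this (a sketch; the exact markup depends on the wiki engine):

```
<meta name="robots" content="noindex,nofollow">
```

"noindex" asks the engine not to include the page in its index, and "nofollow" asks it not to spider the links on that page — but the crawler has to fetch the page to see either.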
Many search engines ignore those meta tags.