Tom Muylle wrote:
$result = wfQuery("SELECT cur_title FROM cur WHERE cur_namespace = 0 AND cur_is_redirect = 0 AND cur_random > 0", DB_READ, $fname);
$min = 1;
$max = wfNumRows($result);
if ($max > 0) {
    wfDataSeek($result, rand($min, $max) - 1);
[snip]
As you can see, it takes a random tuple from the ones returned by the query.
The trouble with this is that MySQL would collect hundreds of thousands of titles (on our largest wikis) and send them all back to PHP. That's a lot of unnecessary overhead for just one title!
On my test box with a copy of en.wikipedia.org:
mysql> select count(cur_title) from cur where cur_namespace=0 and cur_is_redirect=0 and cur_random > 0;
+------------------+
| count(cur_title) |
+------------------+
|           287328 |
+------------------+
1 row in set (1 min 7.04 sec)
Having a precalculated random value stored on each row makes this a very fast, easy operation: the database's index on cur_random lets us seek straight to a random position without all that fuss.
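For reference, the indexed lookup is roughly of this shape (a sketch, not the exact query MediaWiki runs; it assumes cur_random holds a precomputed random value in [0, 1) and is indexed, with the target value generated in PHP, e.g. via mt_rand()):

```sql
-- Generate a random target in [0, 1) in the application, then let
-- the index on cur_random seek straight to the first matching row.
-- No full scan, and only one row is sent back to PHP.
SELECT cur_title
FROM cur
WHERE cur_namespace = 0
  AND cur_is_redirect = 0
  AND cur_random > 0.618  -- hypothetical value supplied by PHP
ORDER BY cur_random
LIMIT 1;
```

Because the WHERE clause and ORDER BY both use the indexed cur_random column, the server can walk the index from the target value and stop after one row, instead of materializing a 287,000-row result set.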
I'm sorry it's not optimized for the case where you have two pages. :)
You shouldn't be receiving a 404 page, though, you should just get the main page if there's nothing else. Can you demonstrate this exactly?
-- brion vibber (brion @ pobox.com)