On Thu, May 01, 2003 at 08:10:38AM -0700, Brion Vibber wrote:
On Wed, 2003-04-30 at 21:48, Tim Starling wrote:
Someone needs to run this:
UPDATE cur SET cur_random=RAND()
Ok, it'll be a bit before it's done. Hooray for big slow databases. :D
-- brion vibber (brion @ pobox.com)
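For context, the cur_random scheme being refreshed above presumably works by storing a precomputed random value per row and then selecting the first row at or above a fresh random threshold. A minimal sketch using Python's sqlite3 as a stand-in database (the `cur` table and `cur_random` column names come from the thread; the exact selection query and the wrap-around fallback are assumptions about how cur_random is used, not confirmed by the mail):

```python
import random
import sqlite3

# In-memory stand-in for the `cur` table described in the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cur (cur_id INTEGER PRIMARY KEY, cur_random REAL)")

# Equivalent of `UPDATE cur SET cur_random=RAND()`: give every row
# a fresh uniform value in [0, 1).
conn.executemany(
    "INSERT INTO cur (cur_id, cur_random) VALUES (?, ?)",
    [(i, random.random()) for i in range(1, 101)],
)

def random_page():
    """Pick a page by jumping to a random point in cur_random order.

    Assumed selection query: take the first row whose stored value is
    at or above a fresh random threshold; wrap around to the smallest
    cur_random when the threshold lands past the largest stored value.
    """
    threshold = random.random()
    row = conn.execute(
        "SELECT cur_id FROM cur WHERE cur_random >= ? "
        "ORDER BY cur_random LIMIT 1",
        (threshold,),
    ).fetchone()
    if row is None:  # threshold above every stored cur_random value
        row = conn.execute(
            "SELECT cur_id FROM cur ORDER BY cur_random LIMIT 1"
        ).fetchone()
    return row[0]
```

With an index on cur_random this is a single range lookup rather than a scan, which is why it scales better than OFFSET into a large table; the trade-off is a slight bias toward pages that sit after large gaps in the stored values.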
Off the top of my head, I can't think of any simple mathematical way to do what you want (that is, make articles that have already been randomly selected less likely to be selected again over time). Even if every cur_random is completely re-randomized, it still isn't going to be fair.
Seems to me that the best way to do this is either:

1) Be truly random. In PostgreSQL, this would be something like:

   SELECT cur_id FROM cur
   OFFSET floor(random() * (SELECT count(cur_id) FROM cur))
   LIMIT 1;

   (PostgreSQL's random() returns a value in [0, 1), so it has to be scaled by the row count. Not sure whether count(*) would be faster than count(cur_id) here, but you get the idea; other constraints may be needed, and OFFSET still has to skip past that many rows.)

2) Keep a timestamp instead of a random number. That way, whenever an article is "randomly" selected, its timestamp gets updated to the current time. Always select the oldest article for a "random" page. New articles always get a current timestamp here.
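Option 2 can be sketched directly: new rows get the current time, and each "random" fetch serves the oldest row and bumps its timestamp to now, so no page repeats until every other page has been served. A minimal sketch with Python's sqlite3 (the `cur` table name is from the thread; the `cur_touched` column and a monotonic counter standing in for the current time are hypothetical, chosen to keep the example deterministic):

```python
import itertools
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cur (cur_id INTEGER PRIMARY KEY, cur_touched INTEGER)"
)

# Monotonic stand-in for "current time" so the example is deterministic.
clock = itertools.count(1)

def add_page(page_id):
    # New articles always get a current timestamp.
    conn.execute(
        "INSERT INTO cur (cur_id, cur_touched) VALUES (?, ?)",
        (page_id, next(clock)),
    )

def random_page():
    # Always serve the least recently served page, then stamp it as served.
    (page_id,) = conn.execute(
        "SELECT cur_id FROM cur ORDER BY cur_touched LIMIT 1"
    ).fetchone()
    conn.execute(
        "UPDATE cur SET cur_touched = ? WHERE cur_id = ?",
        (next(clock), page_id),
    )
    return page_id

for i in (1, 2, 3):
    add_page(i)

# Pages rotate: each one is served exactly once per full cycle.
served = [random_page() for _ in range(6)]
```

Note this is really a round-robin rather than anything random, but that is exactly what makes it fair: every article is served exactly once before any article repeats.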