Akash Mehta wrote: [fixed top posting]
> On 9/9/06, Gregory Maxwell <gmaxwell@gmail.com> wrote:
>> On 9/8/06, Akash Mehta <draicone@gmail.com> wrote:
>>> I'm sure it wouldn't be very expensive. All that needs to happen is for the external (non-interwiki) link code to pass URLs through a script on a server that counts the clicks on that particular link and then forwards the user on to the appropriate site, which would probably be specified through GET. I'll code it if I have time today; it's very simple in PHP, and using a database means that statistics can be generated VERY easily.
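A minimal PHP sketch of the redirect counter described above; the table, column, credential, and parameter names are all hypothetical, and the destination comes in via GET as suggested:

  <?php
  // Count a click on an external link, then forward the reader on.
  // Assumes a MySQL table `clicks` with `url` as a unique key and a
  // `hits` counter; all names and credentials here are hypothetical.

  $url = isset($_GET['url']) ? $_GET['url'] : '';

  // Only forward to http/https targets so the script can't be
  // abused as an open redirector.
  if (!preg_match('#^https?://#i', $url)) {
      header('HTTP/1.0 400 Bad Request');
      exit('Invalid target URL');
  }

  $db = new mysqli('localhost', 'counter', 'secret', 'linkstats');
  $stmt = $db->prepare(
      'INSERT INTO clicks (url, hits) VALUES (?, 1)
       ON DUPLICATE KEY UPDATE hits = hits + 1'
  );
  $stmt->bind_param('s', $url);
  $stmt->execute();

  // Send the reader on to the real destination.
  header('Location: ' . $url, true, 302);
  exit;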
>> Turning every external link follow into a database write on Wikipedia is simply not going to be acceptable.
>>
>> We don't currently log HTTP GETs for a reason...
> That's the only problem. We can't run AWB (AutoWikiBrowser) over every single link in the database and then AWB it all back again just for the purposes of a study, which is why we would have to do it at the parser level in the software for a day or so. And we could easily host the file on the toolserver and handle the redirects/logging from there.
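As for what the parser-level half might look like: rather than AWB-editing articles, the link renderer would wrap each external href in the counting script at render time. A sketch, with COUNTER_URL as a hypothetical address for the script above:

  <?php
  // Rewrite external links at render time instead of editing
  // articles; COUNTER_URL is hypothetical.
  define('COUNTER_URL', 'http://tools.example.org/go.php');

  function wrapExternalLink($href) {
      // Leave anything that isn't a plain http(s) URL untouched,
      // so internal and interwiki links pass through unchanged.
      if (!preg_match('#^https?://#i', $href)) {
          return $href;
      }
      return COUNTER_URL . '?url=' . urlencode($href);
  }

  // wrapExternalLink('http://example.com/page') yields
  // http://tools.example.org/go.php?url=http%3A%2F%2Fexample.com%2Fpage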
The toolserver is overworked enough as it is, thank you very much.
> I have a spare low-traffic MySQL database server we could use. Then again, it would be simpler to build the functionality into the Wikipedia toolbar, but that's a violation of privacy, so it would have to be optional, and we'd be left with a clearly skewed demographic of those who allowed the logging to take place...
Logging of such things can /always/ be circumvented if you try hard enough.