On Tue, Mar 31, 2009 at 5:57 PM, David Gerard dgerard@gmail.com wrote:
<snip>
> (In image search, Google and all other search engines still suck. Here's to tagging coming to Commons.)
Isn't that because people don't label, keyword or otherwise tag images properly? If they did, then Google would be able to find them and provide a good search facility. It might also be because lots of images are locked up in websites that only allow internal searches (though some are Google-able).
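To make that concrete, here is a toy sketch in Python (made-up filenames and keywords, nothing like what Google actually does internally): a purely text-based index can only ever surface images that carry some text in the first place.

    # Toy sketch (not a real search engine): a text-based index can only
    # surface images that have some textual label attached at all.
    images = {
        "img_0042.jpg": [],  # no labels, so effectively invisible
        "tower_bridge_dusk.jpg": ["tower bridge", "london", "dusk"],
    }

    def search(query):
        """Return filenames whose keyword list mentions the query."""
        query = query.lower()
        return [name for name, keywords in images.items()
                if any(query in keyword for keyword in keywords)]

    print(search("london"))  # ['tower_bridge_dusk.jpg']
    print(search("sunset"))  # [] even though img_0042.jpg might well be a sunset

The untagged image never comes back for any query, however relevant it might be, which is roughly the state much of Commons is in.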
It would take something really spectacular to eclipse it; machine summarisation might do it, but I suspect even the machines will be thumbing through Wikipedia to find out what's important and for a place to start their research ;-)
Data on Wikipedia will tend to become more machine-readable. Templates are mostly a good idea.
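Infoboxes are the obvious case: once a fact sits in a template parameter rather than in free prose, even a crude script can pull it out. A rough sketch in Python (made-up wikitext; real template syntax is far messier, with nesting and multi-line values, so treat this as illustration only):

    # Rough sketch of why template parameters are machine-readable:
    # name=value pairs can be extracted with a few lines of code.
    import re

    wikitext = """{{Infobox settlement
    | name       = Example Town
    | population = 12345
    | country    = Exampleland
    }}"""

    fields = dict(re.findall(r"\|\s*(\w+)\s*=\s*(.+)", wikitext))
    print(fields["population"])  # 12345

Getting the same number out of a sentence like "Example Town has a population of about twelve thousand" is a much harder problem.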
The worry there is that overuse of templates raises the barrier for humans to contribute. The trick is to harness the powers of both humans and machines, and make sure they work together and don't get in each other's way. But that's been the case all along, right from the start of the Machine Age, and onwards into the Information Age. Leave the grunt work to machines. Let humans do the clever stuff. Teach machines to approximate what humans do, or run on data and input from humans.
The other worry is that humans coupled with machines can work at a rate that runs the human body into the ground. So you have to set things up so the human can take a break and recharge. Fewer long sessions editing Wikipedia, and more targeted editing, adding more value-per-click (ugh, I can't believe I just said that).
Carcharoth