MHart wrote:
If done in a way similar to Google's API, then ripping a key wouldn't matter much, since the key itself is rate-limited. The client-side applications wouldn't access the webservice API directly; instead, the client-side app provider would act as an intermediary. Client apps query the host provider, and that host queries Wikipedia's API. If the client app provider wants to support more than 1000 queries per day (Google's limit), they'd need to pay for more queries. The key can't be ripped anyway, since it lives with the provider, not the client.
Well, let's play through the scenario:
* User with a KDE desktop fires up amaroK to play some tunes
* Clicks for a Wikipedia article lookup on an artist
* amaroK contacts a server that someone(?) runs for amaroK
* amaroK server contacts Wikipedia server
* Data is sent back to amaroK server
* Data is sent back to amaroK client
* Happy!
Now, while the amaroK server <-> Wikipedia server link is locked by a secret key, the amaroK client <-> amaroK server link probably isn't. Anybody can make a request to the amaroK server while claiming to be amaroK -- an abuser then DoSes the entire amaroK user base once it hits the maximum requests for the amaroK key.
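To make the failure mode concrete, here's a toy sketch (all names hypothetical, not any real amaroK or Wikipedia interface) of why one shared key is a single point of failure: every client, legitimate or not, draws from the one quota attached to the intermediary's key.

```python
QUOTA = 1000  # hypothetical per-key daily limit, like Google's

class ProxyServer:
    """Stands in for the hypothetical amaroK intermediary server."""
    def __init__(self, api_key, quota=QUOTA):
        self.api_key = api_key
        self.remaining = quota

    def handle_client_request(self, article):
        # Nothing here can tell a real amaroK client from an impostor.
        if self.remaining <= 0:
            return "429: shared quota for key %s exhausted" % self.api_key
        self.remaining -= 1
        return "article text for %r" % article

proxy = ProxyServer("amarok-secret-key")
for _ in range(QUOTA):  # one abuser burns the whole daily quota...
    proxy.handle_client_request("spam")
# ...and now every legitimate user is locked out too:
print(proxy.handle_client_request("Miles Davis"))
```

Once the abuser has drained the counter, the last call returns the quota-exhausted error for everyone, which is the DoS described above.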
On Wikipedia's side, the intermediaries' implementation and architecture don't matter. Just implement a webservice API with per-user keys and query limits, and you're done.
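The per-user-key limiting suggested here could be sketched as a daily counter per key; this is a minimal illustration with assumed names (`DAILY_LIMIT`, `allow_query`), not an actual MediaWiki interface.

```python
import datetime

DAILY_LIMIT = 1000  # same order of magnitude as Google's per-key cap

_usage = {}  # api_key -> [date of last query, count for that date]

def allow_query(api_key):
    """Return True if this key still has quota left for today."""
    today = datetime.date.today()
    record = _usage.setdefault(api_key, [today, 0])
    if record[0] != today:          # new day: reset the counter
        record[0], record[1] = today, 0
    if record[1] >= DAILY_LIMIT:
        return False                # this key is out of quota
    record[1] += 1
    return True
```

Because quota is tracked per key rather than per intermediary, an abuser exhausting their own key leaves every other key's users unaffected.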
Well, that just pushes the problem from one server to another. It doesn't change the overall analysis I gave.
-- brion vibber (brion @ pobox.com)