One idea is
the proposal to run the AbuseFilter in a global mode,
i.e. with rules defined at Meta that apply on every wiki. If that were
done (and there is some debate about whether it is a good idea), then
it could be used to block these kinds of URLs from being installed,
even by admins.
Identifying client-side-generated URLs from the server side opens up a
whole lot of problems of its own. Basically you need a script that runs
in a hostile environment and reports back to a server whenever URLs are
injected by code loaded from some sources (mediawiki-space) but not from
others (user space), while code loaded from user space through a call
into mediawiki-space should still be allowed. Add to this that your
URL-identifying code has to run after a script has generated the URL and
before it does any cleanup. The URL verification can't just declare a
URL hostile, it has to check it somehow, and that leads to reporting the
URL back - if the reporting code still executes at that moment. Urk...
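Just to make the verification step concrete: even the simplest piece of such a watcher, the check that decides whether a URL is suspect, needs an allowlist of trusted origins. This is only a sketch of that one step (the host names are assumptions, and the genuinely hard parts - hooking every code path that can emit a URL, and reporting back before a hostile script disables the reporter - are not shown):

```javascript
// Hypothetical allowlist of "mediawiki-space" origins; real deployments
// would need to derive this from the wiki's configuration.
const TRUSTED_HOSTS = new Set([
  'meta.wikimedia.org',
  'en.wikipedia.org',
]);

// Decide whether an injected URL looks suspect. This is the easy part;
// deciding *which code* injected it is the hard part described above.
function isSuspectUrl(url) {
  let host;
  try {
    host = new URL(url).hostname;
  } catch (e) {
    return true; // unparsable URLs are treated as suspect by default
  }
  return !TRUSTED_HOSTS.has(host);
}
```

And even this trivial check can be wrong in both directions: a trusted host can serve attacker-controlled user-space pages, and a hostile script can simply generate its URLs after the watcher has finished running.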
Hmm? There's no reason to do anything like that. The AbuseFilter would
just prevent sitewide JS pages from being saved with the particular URLs
or a particular code block in them. It'll stop the well-meaning but
misguided admins. Short of restricting site JS to the point of
uselessness, you'll never be able to stop determined abusers.
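For illustration, a filter along those lines might look something like the following - the domain is made up, and an actual filter would need a carefully maintained pattern list, but AbuseFilter does expose the edited page's namespace and the added text as variables (namespace 8 is the MediaWiki namespace, where sitewide JS lives):

```
page_namespace == 8 &
added_lines irlike "evil-tracker\.example"
```

Set to disallow, that would bounce any save of a sitewide JS page that adds a matching URL, which is exactly the "stop the well-meaning admin" case - and nothing more.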
--
Alex (wikipedia:en:User:Mr.Z-man)