On Thu, Jun 4, 2009 at 6:20 AM, John at Darkstar<vacuum(a)jeb.no> wrote:
User privacy on Wikipedia is close to a public hoax: pages are
transferred unencrypted and with user names in clear text. Anyone with
access to a public hub is able to intercept and identify users, in
addition to _all_ websites that are referenced during an edit on
Wikipedia through correlation of logs.
This only works for getting info on totally random Wikipedia users
who happen to edit using your router. This isn't a serious compromise
of privacy for practical purposes due to the resources required to get
info on a large number of users, or to target a specific user. Users
who are concerned about this, however, can use
secure.wikimedia.org.
Either you have privacy for _all_ users or you have none. If you accept
lesser privacy for some users, chosen at random, several stat aggregation
schemes are possible. The downside is that you have to accept that some
users in fact have less privacy from time to time.
So if you want real privacy against MITMs, you still
need to use
something like Tor, as usual.
Attacks on Tor are way outside the scope of this discussion, but they
are possible against these kinds of sites.
On Thu, Jun 4, 2009 at 12:53 PM, Robert Rohde<rarohde(a)gmail.com> wrote:
One idea is the proposal to install the
AbuseFilter in a global mode,
i.e. rules loaded at Meta that apply everywhere. If that were done
(and there are some arguments about whether it is a good idea), then
it could be used to block these types of URLs from being installed,
even by admins.
No, it wouldn't.
document.write('<script' + ' src="' + 'http://www.go' + 'ogle-an' +
'alytics.com/urc' + 'hin.js" type="text/javascript"></script>');
Obviously more complicated obfuscation is possible. JavaScript is
Turing-complete. You can't reliably figure out whether it will output
a specific string.
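For instance, a minimal sketch of such obfuscation (assuming the same Urchin URL as above): the hostname is stored reversed, so no substring of "google-analytics" ever appears literally in the source for a filter to match, and only executing the script reveals the real URL.

```javascript
// Hypothetical obfuscation sketch: the target URL is stored reversed
// and reassembled at runtime.
var reversed = 'sj.nihcru/moc.scitylana-elgoog.www//:ptth';
var url = reversed.split('').reverse().join('');
// A blacklist scanning the page source never sees the real hostname;
// 'url' only exists after the script has run.
```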
You can run a script to inspect the DOM tree for external URLs and
report back if something suspicious is found, but that is highly
error-prone and can easily be defeated.
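The DOM-scanning idea could be sketched roughly as follows: collect the src attributes of the page's script elements (e.g. via document.getElementsByTagName('script')) and flag any that point outside a whitelist of trusted hosts. The host names here are made-up examples, and as noted, a script can simply inject new elements after this check has run.

```javascript
// Rough sketch only: given the collected script src URLs, report
// those whose host is not on a trusted whitelist.
function suspiciousSrcs(srcs, trustedHosts) {
  return srcs.filter(function (src) {
    // Strip the scheme and keep the host part of the URL.
    var host = src.replace(/^https?:\/\//, '').split('/')[0];
    return trustedHosts.indexOf(host) === -1;
  });
}
```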
John