On Wed, Nov 30, 2011 at 7:39 PM, Andreas K. <jayen466(a)gmail.com> wrote:
> If the image filter uses a user-specific personal filter list stored on
> the Foundation's server, that would assume that the censor can populate
> the user's list without the user noticing, can prevent the user from
> emptying their PFL again, and can disable the user's ability to click on
> a hidden image to reveal it. Is there something that we could do to make
> that more difficult, or impossible? Because then any censor would be back
> to square one, left to their own devices, rather than being able to ride
> piggy-back on our filter function.
Two misconceptions there. First, a genuine dyed-in-the-wool censor
wouldn't give two figs whether the user trying to access non-conformant
material was aware of being restricted from accessing it.
Secondly, if there is no forcibly programmed barrier, a user can of
course just bypass a soft barrier; but censors rarely use soft barriers.
They tend to be a bit more hardnosed about it.
One of my principal objections to doing anything remotely along the
lines of what the board resolutions and board meeting minutes appear to
reveal about their approach -- more direct speech from that direction
would be welcome, of course, so we do not work under a misapprehension
about their real goals -- is this: creating a structured informational
web of potentially controversial content, a knowledge-base, is a highly
complex task (I would say impossible, but there seems to be a viewpoint
in the direction of the WMF that it could be done). And having created
such a creature purely through the actions of a community that is in
fact largely philosophically opposed to doing any such thing, we as a
community would then be morally obligated to try to mitigate any
attempts to subvert the use of such a knowledge-base. Which, I do
assure you, might not prove to be a trivial task.
Jussi-Ville Heiskanen, ~ [[User:Cimon Avaro]]