On Thu, Jul 22, 2010 at 6:40 PM, Neil Kandalgaonkar <neilk@wikimedia.org> wrote:
> Pushing censorship to the browser means that we have to reimplement it wherever our content is viewed -- including mobile sites and other alternative ways of browsing Wikipedia and sister sites. But that seems like it's doable, particularly since you're exploiting CSS classes.
It would be nice if we could do it on the server side, but it seems infeasible. Even if we didn't have to worry about cache fragmentation, we're still talking about serving many versions of a page based on user preference, so it won't work well on sites that aren't using MediaWiki. However, once we have good-quality categorization of offensive images, third parties could always do their own blocking.
> Blurring seems a bit deluxe to me -- it's probably adequate to just block the image and show something in its place with the same dimensions. (At Flickr, they use an image of greyish-black static for this.)
This was already pointed out. I just didn't update the proposal. Using a stock image rather than blurring is both safer (you can't see anything about the image), and easier to implement.
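For concreteness, the swap could be as simple as pinning the image's rendered dimensions and pointing it at a stock placeholder. This is only a sketch of the idea, not the proposal itself; the placeholder URL and the stashing of the original src are my assumptions:

```javascript
// Sketch: swap a flagged image for a stock placeholder without changing
// the layout.  The placeholder URL and the idea of stashing the original
// src in a data attribute are assumptions for illustration.
function replaceWithPlaceholder(img, placeholderSrc) {
  // Pin the rendered dimensions so the page doesn't reflow.
  img.style.width = img.width + 'px';
  img.style.height = img.height + 'px';
  // Keep the real source around in case the user later opts to view it.
  img.dataset.originalSrc = img.src;
  img.src = placeholderSrc;
}
```

Because only the src changes and the box keeps its size, this is also safer than blurring: nothing of the original image is visible at all.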
On Thu, Jul 22, 2010 at 6:56 PM, Neil Kandalgaonkar <neilk@wikimedia.org> wrote:
> Half of Aryeh's proposal is about generating CSS rules on the fly. But imagine if you could load content-blocking CSS like we have for skins today.
> I imagine a system that lets you pick which categories to block, and then either creates or reuses a simple CSS file that can be cached forever.
It can't be cached at all, if it depends on user preferences.
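For concreteness, the stylesheet under discussion might be generated along these lines (a sketch only; the pref-cat-* class names are my invention, and the real system would need some mapping from image categories to classes in the HTML):

```javascript
// Sketch: generate a blocking stylesheet from a user's chosen categories.
// The "pref-cat-*" class naming scheme is hypothetical.
function buildBlockingCss(blockedCategories) {
  return blockedCategories
    .map(function (cat) {
      return 'img.pref-cat-' + cat + ' { visibility: hidden; }';
    })
    .join('\n');
}
```

Since the output is a function of the user's preferences, a single shared cached copy can't serve everyone; at best you could cache one file per category combination, which is closer to what Neil describes.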
> So the CSS could set visibility:hidden for the offending content first. Then *if* the JS manages to run, you get the click-through-to-view interface.
So the fallback for no JS would be that some images mysteriously disappear with no reason given, and there's no way to access them? I'd personally be happy with that, but I think the anti-censorship people would have issues with it.
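The progressive-enhancement path Neil sketches might look roughly like this (a sketch under assumptions: the placeholder URL is invented, and note that a visibility:hidden element can't receive clicks, so the script has to un-hide it with a placeholder in place first):

```javascript
// Sketch: the stylesheet has already applied visibility:hidden to flagged
// images.  If this script runs, it swaps in a stock placeholder, un-hides
// the element, and reveals the real image only on an explicit click.
// PLACEHOLDER_SRC is a hypothetical URL, not part of the proposal.
var PLACEHOLDER_SRC = 'blocked-static.png';

function enableClickThrough(img) {
  img.dataset.originalSrc = img.src;
  img.src = PLACEHOLDER_SRC;
  img.style.visibility = 'visible'; // safe now: only the placeholder shows
  img.onclick = function () {
    img.src = img.dataset.originalSrc; // user opted in; show the image
    img.onclick = null;
  };
  // Without JS, none of this runs: the image simply stays hidden, which
  // is exactly the no-explanation fallback objected to above.
}
```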