Text browser users won't see your important site notice. Better use the $wgSiteNotice variable... Indeed, text browser users heard the closing news on the radio.
On 17 January 2012 16:40, jidanni@jidanni.org wrote:
Text browser users won't see your important site notice. Better use the $wgSiteNotice variable... Indeed, text browser users heard the closing news on the radio.
http://en.m.wikipedia.org/ isn't showing the site notice either. IT REALLY NEEDS TO.
- d.
On Tue, 17 Jan 2012 08:40:10 -0800, jidanni@jidanni.org wrote:
Text browser users won't see your important site notice. Better use the $wgSiteNotice variable... Indeed, text browser users heard the closing news on the radio.
- $wgSiteNotice is practically dead; we could probably even drop support for it. MediaWiki:Sitenotice is the thing to use.
- Wikimedia uses dismissable sitenotices. Text browsers don't see notices anyway.
- Do we even care that text browser users won't see a SOPA warning?
On 18 January 2012 00:48, Daniel Friesen lists@nadir-seen-fire.com wrote:
- Do we even care that text browser users won't see a sopa warning?
We really should have warned mobile users.
- d.
On 18 January 2012 00:48, Daniel Friesen lists@nadir-seen-fire.com wrote:
- Do we even care that text browser users won't see a sopa warning?
http://stats.wikimedia.org/wikimedia/squids/SquidReportClients.htm says "not really" - lost in the noise.
- d.
On Tue, 17 Jan 2012 17:40:10 +0100, jidanni@jidanni.org wrote:
Text browser users won't see your important site notice. Better use the $wgSiteNotice variable... Indeed, text browser users heard the closing news on the radio.
Yes this is working as expected. Pages are still served and the blackout is "just" an overlay above the text.
Ha, the blackout is a JavaScript-powered blackout. No reason to bother text browser users; they won't notice the disruption in the first place!
Please don't remove $wgSiteNotice from MediaWiki. It's the only way we wiki family administrators (http://www.mediawiki.org/wiki/Manual:Wiki_family) can, with one edit to LocalSettings.php, put a notice on all of our many wikis without editing each wiki's MediaWiki:Sitenotice or performing other database-changing operations.
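For illustration, a minimal sketch of the wiki-family use case just described: a single assignment in a shared settings file included from every wiki's LocalSettings.php (the notice wording below is invented).

    <?php
    # Shared configuration included by every wiki in the family; one edit here
    # puts the same notice on all wikis without touching each MediaWiki:Sitenotice.
    $wgSiteNotice = "'''Site notice:''' scheduled maintenance this Saturday - "
        . "see the announcement on our hub wiki for details.";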
DG> http://stats.wikimedia.org/wikimedia/squids/SquidReportClients.htm That table looks horrible in text browsers.
Making the redirect JavaScript-based is not a good idea. At least 2,213,922 users will never see it
https://addons.mozilla.org/en-US/firefox/addon/noscript/?src=search
and maybe way, way more.
Today, virtually NONE of my (many) friends ever noticed that something changed, until I told them.
And even with JS, you can see the 'real' page popping up first before the redirect takes action. It does not feel like 'oops, Wikipedia is gone!?' but more like just another annoying advert.
You should do a straightforward real shutdown instead, and deliver a fake 404 with an explanation link. And for several more days.
And yes, I would even block editing, since that would also alert people.
Michael, As was mentioned here, a 503 would be the most appropriate HTTP response code to serve. It would also prevent non-JS users and text-only users from bypassing the blackout, it would avoid the flicker effect, and it would cause search engines to correctly "back off" from trying to index our pages. It would also correctly shut down all editing (if the site cannot be read, why do you need "emergency" edit rights anyway?). I have no idea why this ineffective kludge was implemented instead. On Jan 18, 2012 11:13 AM, "Michael" codejodler@gmx.ch wrote:
Today, virtually NONE of my (many) friends ever noticed that something changed, until I told them.
And even with JS, you can see the 'real' page popping up first before the redirect takes action. It does not feel like 'oops, Wikipedia is gone!?' but more like just another annoying advert.
You should do a straightforward real shutdown instead, and deliver a fake 404 with an explanation link. And for several more days.
And yes, I would even block editing, since that would also alert people.
Please see http://meta.wikimedia.org/wiki/English_Wikipedia_SOPA_blackout/Technical_FAQ
Cheers, Erik
On 18 January 2012 18:20, Dan Collins dcollin1@stevens.edu wrote:
Michael, As was mentioned here, a 503 would be the most appropriate HTTP response code to serve. It would also prevent non-JS users and text-only users from bypassing the blackout, it would avoid the flicker effect, and it would cause search engines to correctly "back off" from trying to index our pages. It would also correctly shut down all editing (if the site cannot be read, why do you need "emergency" edit rights anyway?). I have no idea why this ineffective kludge was implemented instead.
I don't understand why it was done this way either... the consensus was for a full blackout, so why not just set all the squids to redirect to a page with the banner and explanation along with a 503 status code and be done with it?
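As a rough sketch of the static-503 approach being suggested here - a hypothetical PHP responder, not what was actually deployed, with an invented file path:

    <?php
    # Hypothetical blackout responder: every request gets the banner page and a 503.
    header( 'HTTP/1.1 503 Service Unavailable' );
    header( 'Retry-After: 86400' );                  // ask crawlers to retry after 24 hours
    header( 'Content-Type: text/html; charset=UTF-8' );
    readfile( '/srv/blackout/sopa-banner.html' );    // static banner + explanation page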
I was rather concerned by people thinking we need to allow "emergency access" - what kind of emergencies are going to mean people need Wikipedia? And is everyone having such an emergency going to have read the FAQ and know how to get around the blackout?
On 18/01/2012, Thomas Dalton thomas.dalton@gmail.com wrote: <snip>
I was rather concerned by people thinking we need to allow "emergency access" - what kind of emergencies are going to mean people need Wikipedia? And is everyone having such an emergency going to have read the FAQ and know how to get around the blackout?
Speaking as one of the closers of the "RFC", some of the things we were thinking of were a DMCA notice, Legal needing to get something taken down right now or some other OFFICE-type action, removal of an obvious copyright violation, information that needed to be suppressed, or just something that went wrong from the technical end of things and needed fixing right away. Remember this has sort of been put together with baling wire and sealing wax, and we wanted to make sure to leave a door open for unforeseen situations where it was possible to take immediate action if required.
Risker/Anne
On 18 January 2012 19:32, Risker risker.wp@gmail.com wrote:
On 18/01/2012, Thomas Dalton thomas.dalton@gmail.com wrote: <snip>
I was rather concerned by people thinking we need to allow "emergency access" - what kind of emergencies are going to mean people need Wikipedia? And is everyone having such an emergency going to have read the FAQ and know how to get around the blackout?
Speaking as one of the closers of the "RFC", some of the things we were thinking of were a DMCA notice, Legal needing to get something taken down right now or some other OFFICE-type action, removal of an obvious copyright violation, information that needed to be suppressed, or just something that went wrong from the technical end of things and needed fixing right away. Remember this has sort of been put together with baling wire and sealing wax, and we wanted to make sure to leave a door open for unforeseen situations where it was possible to take immediate action if required.
If the whole site is down, you don't really need to worry about takedown orders...
Even if there was a need for an OFFICE action, people in the office are just a short walk away from the ops team that can do whatever needs doing.
On 18 January 2012 16:40, Thomas Dalton thomas.dalton@gmail.com wrote:
On 18 January 2012 19:32, Risker risker.wp@gmail.com wrote:
On 18/01/2012, Thomas Dalton thomas.dalton@gmail.com wrote: <snip>
I was rather concerned by people thinking we need to allow "emergency access" - what kind of emergencies are going to mean people need Wikipedia? And is everyone having such an emergency going to have read the FAQ and know how to get around the blackout?
Speaking as one of the closers of the "RFC", some of the things we were thinking of were a DMCA notice, Legal needing to get something taken down right now or some other OFFICE-type action, removal of an obvious copyright violation, information that needed to be suppressed, or just something that went wrong from the technical end of things and needed fixing right away. Remember this has sort of been put together with baling wire and sealing wax, and we wanted to make sure to leave a door open for unforeseen situations where it was possible to take immediate action if required.
If the whole site is down, you don't really need to worry about takedown orders...
Even if there was a need for an OFFICE action, people in the office are just a short walk away from the ops team that can do whatever needs doing.
Actually, you do need to worry about takedown orders - the site is *not* shut down, it's accessible through Mobile and through very simple, easily discoverable methods. It's just not editable. And the ops team is included in the "emergency" provisions. Don't think we haven't already had significant screaming about the tiny number of edits and actions that have taken place in the last 17 hours. Trust me on this: some people on this list may think that their actions are divorced from "the community", but "the community" doesn't see it that way at all. The back door is for you folks to do your work with the smallest number of pitchfork and torch marks possible.
Risker/Anne
On 18 January 2012 22:02, Risker risker.wp@gmail.com wrote:
Actually, you do need to worry about takedown orders - the site is *not* shut down, it's accessible through Mobile and through very simple, easily discoverable methods.
I know. That's what I'm saying I don't understand. If we're going to have a blackout, why not do it properly?
I'm curious what the current demographic/usage cases for text browsers are. I'm not asking this to undercut the argument, but as a developer hoping to improve the Wikipedia experience for as many users as possible. It's my understanding that blind users no longer use text browsers, but instead use screen readers with regular browsers (based on my one conversation with a blind Wikimedian). Who are the people using text browsers and why? What is the current text browsing experience like on Wikipedia? Do we serve the mobile version to text browsers or the regular version of the site? Is Lynx still the most popular text browser?
Sorry for my ignorance on this subject.
Ryan Kaldari
On 1/17/12 11:48 PM, jidanni@jidanni.org wrote:
Ha, the blackout is a JavaScript-powered blackout. No reason to bother text browser users; they won't notice the disruption in the first place!
Please don't remove $wgSiteNotice from MediaWiki. It's the only way we wiki family administrators (http://www.mediawiki.org/wiki/Manual:Wiki_family) can, with one edit to LocalSettings.php, put a notice on all of our many wikis without editing each wiki's MediaWiki:Sitenotice or performing other database-changing operations.
DG> http://stats.wikimedia.org/wikimedia/squids/SquidReportClients.htm That table looks horrible in text browsers.
As was mentioned here, a 503 would be the most appropriate HTTP response code to serve. It would also prevent non-JS users and text-only users from bypassing the blackout, it would avoid the flicker effect, and it would cause search engines to correctly "back off" from trying to index our pages. It would also correctly shut down all editing (if the site cannot be read, why do you need "emergency" edit rights anyway?). I have no idea why this ineffective kludge was implemented instead.
It's really easy to say how "easy" it is to do it the "right" way, but a lot harder to actually do it.
So you know, we were asked by the search engines to not change our response codes. They said it would just make their jobs harder. We aren't indexed like every other site.
With the short period of time we had to implement this, we wanted to make sure we didn't do anything that would have nasty repercussions. Doing things wrong could mean a screwed-up cache (which would have to be purged), being deindexed from search engines (which is obviously bad), etc.
- Ryan
On Wed, Jan 18, 2012 at 5:19 PM, Ryan Lane rlane32@gmail.com wrote:
So you know, we were asked by the search engines to not change our response codes. They said it would just make their jobs harder. We aren't indexed like every other site.
Strange, considering Google said to use 503s so it would make their job easier.
https://plus.google.com/u/0/115984868678744352358/posts/Gas8vjZ5fmB
Wikipedia is not "like everyone else.". We were specifically told NOT to use 503 errors, because they index us different than the rest of the Internet.
Snt frm my iPhne
On Jan 18, 2012, at 3:32 PM, OQ overlordq@gmail.com wrote:
On Wed, Jan 18, 2012 at 5:19 PM, Ryan Lane rlane32@gmail.com wrote:
So you know, we were asked by the search engines to not change our response codes. They said it would just make their jobs harder. We aren't indexed like every other site.
Strange, considering Google said to use 503s so it would make their job easier.
https://plus.google.com/u/0/115984868678744352358/posts/Gas8vjZ5fmB
Thomas,
On 18 January 2012 22:02, Risker risker.wp@gmail.com wrote:
Actually, you do need to worry about takedown orders - the site is *not* shut down, it's accessible through Mobile and through very simple, easily discoverable methods.
I know. That's what I'm saying I don't understand. If we're going to have a blackout, why not do it properly?
I second that.
Also, it seems as if the starting point of this discussion somehow gets lost.
It was about the JavaScript redirection simply not working for many millions of browser configurations - like Firefox running NoScript, or Internet Explorer configured extra securely (i.e. because you do not add sites to the whitelist when they do not strictly require JS to work), and, to complete the list, text browsers. But the Firefox and Internet Explorer figures are probably some magnitudes higher. (I'm not familiar with Chrome, Opera, Safari, etc., but I guess they also have secure settings and plugins.)
I do not mind whether you send a 503 or 404 or whatever. You will work that out somehow, and if it means search engines have to catch up for some days, then why not - is that actually such a catastrophe? But I really don't know.
But the point was, this is not about how to still HAVE access for users, but about how to really NOT have access. Wasn't that the point of the protest to start with?
Strange, considering Google said to use 503s so it would make their job easier.
https://plus.google.com/u/0/115984868678744352358/posts/Gas8vjZ5fmB
Right. I read that post too. As I said, they told us not to change return codes. They told us so directly.
I think them telling us directly trumps a Google+ post.
- Ryan
I second that.
Also, it seems as if the starting point of this discussion somehow gets lost.
It was about the JavaScript redirection simply not working for many millions of browser configurations - like Firefox running NoScript, or Internet Explorer configured extra securely (i.e. because you do not add sites to the whitelist when they do not strictly require JS to work), and, to complete the list, text browsers. But the Firefox and Internet Explorer figures are probably some magnitudes higher. (I'm not familiar with Chrome, Opera, Safari, etc., but I guess they also have secure settings and plugins.)
I do not mind whether you send a 503 or 404 or whatever. You will work that out somehow, and if it means search engines have to catch up for some days, then why not - is that actually such a catastrophe? But I really don't know.
But the point was, this is not about how to still HAVE access for users, but about how to really NOT have access. Wasn't that the point of the protest to start with?
So, this will be my last comment on this.
In the time frame we had to implement this, it wasn't possible to do a 100% blackout that would have been completely impenetrable. There were a number of suggestions that could have blacked everything out completely, but very, very likely would have broken things in a way that would have lasted more than the blackout period. We have to consider:
1. Search engines
2. Our caches
3. Upstream caches
4. API users
5. Screen scrapers
6. Things we didn't have time to consider <-- this is a big one
The goal was to inform as many people as possible about the effects of the bills, and I think we were effective at doing so.
- Ryan
On 18 January 2012 23:59, Ryan Lane rlane32@gmail.com wrote:
So, this will be my last comment on this.
In the time frame we had to implement this, it wasn't possible to do a 100% blackout that would have been completely impenetrable. There were a number of suggestions that could have blacked everything out completely, but very, very likely would have broken things in a way that would have lasted more than the blackout period. We have to consider:
- Search engines
- Our caches
- Upstream caches
- API users
- Screen scrapers
- Things we didn't have time to consider <-- this is a big one
The goal was to inform as many people as possible about the effects of the bills, and I think we were effective at doing so.
The sites have gone down accidentally on numerous occasions and any user trying to access them just got an error message. The world didn't seem to end on any of those occasions...
"RK" == Ryan Kaldari rkaldari@wikimedia.org writes:
RK> I'm curious what the current demographic/usage cases for text browsers
I use "emacs-w3m", keeps my computer gunk-free, and shows me what an article might look like if I use plucker to pluck it to my Palm M505.
Only if an article looks bad in my text browser do I fire up Firefox.
Making sure articles look good in w3m ensures they are machine processable -- as you never know what kind of machine might be processing your article.
On Thu, Jan 19, 2012 at 11:00 AM, jidanni@jidanni.org wrote:
Making sure articles look good in w3m ensures they are machine processable -- as you never know what kind of machine might be processing your article.
No, machine-accessible interfaces should be using the API to access it... Screen scraping is bad, mkay...
Making sure articles look good in w3m ensures they are machine processable -- as you never know what kind of machine might be processing your article.
We have an API for machines. We don't support screen scraping.
- Ryan
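For example, a minimal sketch of reading article text through the action API rather than scraping HTML (standard query/revisions parameters; error handling omitted):

    <?php
    # Fetch the raw wikitext of an article via api.php instead of parsing its HTML.
    $url = 'https://en.wikipedia.org/w/api.php?action=query&prop=revisions'
         . '&rvprop=content&format=json&titles=' . urlencode( 'Stop Online Piracy Act' );
    $data = json_decode( file_get_contents( $url ), true );
    foreach ( $data['query']['pages'] as $page ) {
        echo $page['revisions'][0]['*'];             // latest revision's wikitext
    }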
On 19/01/12 01:10, Thomas Dalton wrote:
The sites have gone down accidentally on numerous occasions and any user trying to access them just got an error message. The world didn't seem to end on any of those occasions...
There is a difference between accidentally breaking the site and pulling the plug on purpose. We have had outages of several hours, but unlike the blackout, the sysadmins were working on fixing them from the moment they learned about them. Plus, I think you would need to go back to the early days of Wikipedia to find an outage where the site was unavailable for so long.
On 19 January 2012 09:12, Platonides Platonides@gmail.com wrote:
On 19/01/12 01:10, Thomas Dalton wrote:
The sites have gone down accidentally on numerous occasions and any user trying to access them just got an error message. The world didn't seem to end on any of those occasions...
There is a difference between accidentally breaking the site and pulling the plug on purpose. We have had outages of several hours, but unlike the blackout, the sysadmins were working on fixing them from the moment they learned about them. Plus, I think you would need to go back to the early days of Wikipedia to find an outage where the site was unavailable for so long.
No, there isn't a difference. A blackout where everyone sees a page with a particular message instead of the article they wanted is exactly the same as unscheduled downtime where everyone sees a page with a particular message instead of the article they wanted. If search engines and caches can survive one of them, they can survive both, since they are identical from an external perspective.
No, there isn't a difference. A blackout where everyone sees a page with a particular message instead of the article they wanted is exactly the same as unscheduled downtime where everyone sees a page with a particular message instead of the article they wanted. If search engines and caches can survive one of them, they can survive both, since they are identical from an external perspective.
I'm sorry, but this is silly. I have a hard time believing that you aren't simply trolling here.
- Ryan
On 18/01/12 14:44, Ryan Kaldari wrote:
I'm curious what the current demographic/usage cases for text browsers are. I'm not asking this to undercut the argument, but as a developer hoping to improve the Wikipedia experience for as many users as possible. It's my understanding that blind users no longer use text browsers, but instead use screen readers with regular browsers (based on my one conversation with a blind Wikimedian). Who are the people using text browsers and why? What is the current text browsing experience like on Wikipedia? Do we serve the mobile version to text browsers or the regular version of the site? Is Lynx still the most popular text browser?
Sorry for my ignorance on this subject.
Most of our bug reports about text browser support come from Jidanni, who has his own special reasons for using a text browser, as you can see in his reply.
We have had a few other people complain about text browser issues over the years. One such user told me that s/he used a text browser via SSH to a personal server, as a workaround for corporate network access policies denying access to the outside web.
Probably these two users are roughly representative of text browser users in general:
* A group who use a text browser as a strange personal choice
* A group who use a text browser out of temporary technical necessity (ancient/broken hardware, restricted network access, etc.)
Certainly such users are extremely rare: neither w3m nor Lynx appears on the long list of User-Agent headers at
http://stats.wikimedia.org/wikimedia/squids/SquidReportClients.htm
So it follows that they make up less than 0.02% of requests.
-- Tim Starling
On 20 January 2012 01:06, Ryan Lane rlane32@gmail.com wrote:
No, there isn't a difference. A blackout where everyone sees a page with a particular message instead of the article they wanted is exactly the same as unscheduled downtime where everyone sees a page with a particular message instead of the article they wanted. If search engines and caches can survive one of them, they can survive both, since they are identical from an external perspective.
I'm sorry, but this is silly. I have a hard time believing that you aren't simply trolling here.
How is it silly? I'm not trolling, I just think the way the blackout was implemented looked really unprofessional and I can't see any good reason for not having done a better job. All we wanted was for anyone viewing any page on the site to see a particular static page rather than what they would usually see. That isn't difficult to do, as evidenced by the fact that it happens automatically whenever the site breaks.
On 20 January 2012 08:24, Thomas Dalton thomas.dalton@gmail.com wrote:
On 20 January 2012 01:06, Ryan Lane rlane32@gmail.com wrote:
No, there isn't a difference. A blackout where everyone sees a page with a particular message instead of the article they wanted is exactly the same as unscheduled downtime where everyone sees a page with a particular message instead of the article they wanted. If search engines and caches can survive one of them, they can survive both, since they are identical from an external perspective.
I'm sorry, but this is silly. I have a hard time believing that you aren't simply trolling here.
How is it silly? I'm not trolling, I just think the way the blackout was implemented looked really unprofessional and I can't see any good reason for not having done a better job. All we wanted was for anyone viewing any page on the site to see a particular static page rather than what they would usually see. That isn't difficult to do, as evidenced by the fact that it happens automatically whenever the site breaks.
But that wasn't what was wanted, Thomas. There was a specifically voiced desire to make *certain* pages accessible - such as the articles about SOPA and PIPA - and they were exempted from the blackout. That presented a different operational challenge than a total blackout would do. Given that priority one will always be "don't break the site", I think the team did about the best they could in the time they had, keeping in mind the impossibility of testing alternate solutions. It was just as important to be able to confidently bring the site back up after 24 hours as it was to go dark for those 24 hours.
Perhaps folks with additional recommendations might want to add them at the post-mortem page on Meta.[1]
Risker/Anne
[1] http://meta.wikimedia.org/wiki/English_Wikipedia_anti-SOPA_blackout/Post-mor...
On 1/19/12 8:06 PM, Ryan Lane wrote:
I'm sorry, but this is silly. I have a hard time believing that you aren't simply trolling here.
Personally, I decided to stop responding to Thomas Dalton some time ago. Many of his rants seem to be trolling. Typical college freshman, knows more than the rest of us.
Let's comment on the issue, not on the people.
It's true that there wasn't a lot of time to prepare this, and that what Thomas asked for - replacing the live site with static content - is not so easy. There are many benefits (especially mind the traffic: the full content is served and then overwritten with an extra banner) to having a static page instead of the live site overlaid by a JavaScript window, but I don't believe it was possible to prepare such a site that fast and to properly test whether shutting down the whole English Wikipedia for that time wouldn't break other systems. It would also have been much harder to keep the content of a static site updated in response to requests made during the blackout.
I believe that the operations team did great work during the protest, and I know that a lot of them spent the whole night just keeping an eye on the cluster, which was heavily impacted by this action and needed many immediate updates from them.
On Fri, Jan 20, 2012 at 2:41 PM, William Allen Simpson < william.allen.simpson@gmail.com> wrote:
On 1/19/12 8:06 PM, Ryan Lane wrote:
I'm sorry, but this is silly. I have a hard time believing that you aren't simply trolling here.
Personally, I decided to stop responding to Thomas Dalton some time
ago. Many of his rants seem to be trolling. Typical college freshman, knows more than the rest of us.
On Fri, 20 Jan 2012 05:24:44 -0800, Thomas Dalton thomas.dalton@gmail.com wrote:
On 20 January 2012 01:06, Ryan Lane rlane32@gmail.com wrote:
No, there isn't a difference. A blackout where everyone sees a page with a particular message instead of the article they wanted is exactly the same as unscheduled downtime where everyone sees a page with a particular message instead of the article they wanted. If search engines and caches can survive one of them, they can survive both, since they are identical from an external perspective.
I'm sorry, but this is silly. I have a hard time believing that you aren't simply trolling here.
How is it silly? I'm not trolling, I just think the way the blackout was implemented looked really unprofessional and I can't see any good reason for not having done a better job. All we wanted was for anyone viewing any page on the site to see a particular static page rather than what they would usually see. That isn't difficult to do, as evidenced by the fact that it happens automatically whenever the site breaks.
It is in fact difficult to do. The message that comes up when the site is down has nothing to do with what would be necessary to have the cluster serve out a SOPA page.
The cluster is NOT designed to serve out something 'instead' of what it usually serves. The cluster is designed to serve Wikipedia's MediaWiki installation, period.
Error pages are served by the apaches, not the squids/varnishes, and we can't rely on that for serving a SOPA page. An error page necessitates one of two interactions with the cache. Either the cache stores the contents of the error page and keeps serving it - which is obviously NOT what one wants in the normal case, since the squid/varnish cache would still be serving an error page after the issue has gone away - or the cache is kept empty, which is what you'd naturally expect serving an error to mean. But that's NOT what we want for a SOPA page either: if that happens, then either we're still serving cached copies of Wikipedia articles when we're supposed to be serving a SOPA page, or EVERY request ends up bypassing the cache and hitting the apaches to get the uncached SOPA page. That is NOT an acceptable implementation of a SOPA page, because that kind of traffic bypassing the cache would kill the apaches and cripple the cluster. It would be like DDoSing Wikipedia's SOPA page.
So this means that a real SOPA page would likely involve modifications to the caching configuration - probably also something that involves purging the ENTIRE front-end cache, both before and after the SOPA setup - and naturally deployment of something that will serve the SOPA page throughout the cluster, potentially outside of the actual MediaWiki installation, despite the fact that the cluster was only designed to handle the MediaWiki installation. And of course ops also needs to make sure that the cluster can even handle the traffic when all the cached entries disappear and piles of requests need to be made to the apaches to repopulate the cache. Then there is the issue of testing the whole thing before deployment.
So yes, the notion that a SOPA page and the error pages served when the apaches can't handle traffic are identical is silly, very silly.
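To put the caching dilemma above in header terms - a hypothetical sketch only, not the actual Squid/Varnish setup, with an invented $blackoutActive flag:

    <?php
    $blackoutActive = true;  // invented flag, just for the sake of the sketch
    header( 'HTTP/1.1 503 Service Unavailable' );
    if ( $blackoutActive ) {
        // A deliberate blackout page should stay in the front-end caches for its
        // whole duration, otherwise every request falls through to the apaches.
        header( 'Cache-Control: public, s-maxage=86400' );
    } else {
        // An ordinary error page must not be cached, or it would keep being
        // served after the underlying problem has gone away.
        header( 'Cache-Control: no-cache' );
    }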
On 20 January 2012 15:22, Daniel Friesen lists@nadir-seen-fire.com wrote:
It is in fact difficult to do. The message that comes up when the site is down has nothing to do with what would be necessary to have the cluster serve out a SOPA page.
Not only that, but also something I haven't seen anyone note in this thread: the gains are very small. The part of the population that uses text browsers is a) very small and b) technologically literate - i.e. most of them would already know about SOPA. Although there are more people using script blockers, these people will in general also be quite technologically literate.
Remember - this banner was not there for us to see. It was for the average office worker, teacher and student. And we have all laughed at the reactions on Twitter - reactions that only showed we had reached the target audience.
Best, Merlijn
Anne,
Many thanks for being aware of the need to create a log page. But given the many difficult aspects, and the importance of being prepared in the future, would it be an exaggeration to think it could become a very large discussion?
Could it perhaps be wise to split and structure it even more right from the beginning (say, one page for each issue, and a clear distinction between technical and community/decision pages)?
I imagine that distinction cannot always be drawn clearly, but at least it should be tried. I am not that much into wiki culture, but maybe it's possible to just copy the existing comments and replace the existing content with just a table of contents for the subchapter pages.
More importantly, I wonder whether that whole content is not more like the 'talk' side of a page whose 'article' would be the knowledge extracted from all the comments. So why not start it as talk pages altogether?
-- Michal
Perhaps folks with additional recommendations might want to add them at the post-mortem page on Meta.[1]
Risker/Anne
[1] http://meta.wikimedia.org/wiki/English_Wikipedia_anti-SOPA_blackout/Post-mor...