On Mon, Feb 17, 2014 at 11:18 PM, Philip Neustrom <philip@localwiki.org> wrote:
The latest Snowden docs have some great screenshots of the NSA-internal MediaWiki installation from which Snowden is alleged to have obtained much of his material:
https://firstlook.org/theintercept/article/2014/02/18/snowden-docs-reveal-co...
Looks like a static HTML dump, as a few of the external extension images haven't loaded.
The latest details on their technical infrastructure indicated that Snowden used "web crawler" software (love the quotes) to obtain information from their internal wiki:
http://www.nytimes.com/2014/02/09/us/snowden-used-low-cost-tool-to-best-nsa....
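
For context, such a "crawler" needn't be anything exotic: against a wiki with no read restrictions, a few lines talking to the stock api.php will enumerate and fetch every page. Here's a minimal sketch; the endpoint URL is a hypothetical placeholder, and it assumes the era-appropriate pre-rvslots JSON format:

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    # Hypothetical endpoint; any stock MediaWiki exposes the same api.php.
    API = "https://intelwiki.example/w/api.php"

    def api_get(params):
        """One GET against api.php, decoded as JSON."""
        query = dict(params, format="json")
        with urlopen(API + "?" + urlencode(query)) as resp:
            return json.load(resp)

    def all_pages():
        """Yield every page title via list=allpages, following the
        API's standard continuation protocol."""
        params = {"action": "query", "list": "allpages",
                  "aplimit": "500", "continue": ""}
        while True:
            data = api_get(params)
            for page in data["query"]["allpages"]:
                yield page["title"]
            if "continue" not in data:
                break
            params.update(data["continue"])

    def page_wikitext(title):
        """Fetch one page's raw wikitext via prop=revisions."""
        data = api_get({"action": "query", "prop": "revisions",
                        "rvprop": "content", "titles": title})
        page = next(iter(data["query"]["pages"].values()))
        return page["revisions"][0]["*"]

    if __name__ == "__main__":
        for title in all_pages():
            print(title)  # or: write page_wikitext(title) to disk

Once you can read anything at all, the API will happily hand over everything -- which is exactly where read ACLs (or the lack of them) come in.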
What's not mentioned in the NYT piece is that their MediaWiki instance likely didn't have any read ACLs set up -- or, if it did, they were buggy (are any of the third-party ACL extensions good?) -- which is perhaps one reason why Snowden was able to access the entire site once he had any access at all?
"If you actually need fancy read restrictions to keep some of your own people from reading each others' writing, MediaWiki is not the right software for you." -brion.
..like, if you're a nation-state's intelligence agency, or something :P
I think it's fascinating that this technical decision[1] by the MediaWiki team long ago may have had such an impact on the world! And much more fascinating that the NSA folks may not have read the docs.
There's a good article about this on the Washington Post web site (http://www.washingtonpost.com/blogs/monkey-cage/wp/2014/02/10/how-the-911-co...). The author argues that the choice of software facilitating discovery and collaboration was deliberate, motivated by the 9/11 Commission Report, which attributed intelligence failures to a lack of effective knowledge-sharing within the intelligence community.