ScraperWiki is about playing with data (like a cool Excel), while WikiTeam extracts full page histories and images. The two are unrelated.
We surpassed 3,000 preserved wikis yesterday (http://code.google.com/p/wikiteam/wiki/AvailableBackups), and the number is growing quickly. We upload the dumps to the Internet Archive, whose folks know a bit about long-term preservation.
Wiki preservation is part of my research on wikis; later I plan to compare these wiki communities with Wikipedia. I'm open to suggestions.
Just wow... Thank you WikiTeam and task force! Is ScraperWiki involved? SJ

On Tue, Aug 7, 2012 at 5:18 AM, emijrp <firstname.lastname@example.org> wrote:
I think this is the first time a full XML dump of Citizendium has been publicly available (CZ offers dumps, but only the last revision of each article, and our previous efforts produced corrupted and incomplete dumps). It contains 168,262 pages and 753,651 revisions (9 GB, 99 MB in 7z). It may be useful for researchers, including for quality analysis.
It was generated using WikiTeam tools. This is part of our task force to make backups of thousands of wikis around the Internet.
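For readers unfamiliar with the tools: WikiTeam dumps are typically produced with the project's dumpgenerator.py script. The command below is an illustrative sketch (the wiki URL is a placeholder, and flags reflect the tool's documented usage at the time), not necessarily the exact invocation used for the Citizendium dump:

```shell
# Sketch: produce a full-history XML dump plus images from a MediaWiki site
# using WikiTeam's dumpgenerator.py. The target URL is a placeholder.
python dumpgenerator.py --api=http://example-wiki.org/w/api.php --xml --images
```

The script talks to the wiki's api.php endpoint, exporting every revision of every page to XML and downloading the image files alongside it.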
--
Emilio J. Rodríguez-Posada. E-mail: emijrp AT gmail DOT com
Pre-doctoral student at the University of Cádiz (Spain)
Projects: AVBOT | StatMediaWiki | WikiEvidens | WikiPapers | WikiTeam
Personal website: https://sites.google.com/site/emijrp/
Wiki-research-l mailing list
Samuel Klein @metasj w:user:sj +1 617 529 4266