Hi all,
Just like the scripts to preserve wikis[1], I'm working on a new script to download all Wikimedia Commons images, packed by day. But I have limited spare time. It's sad that volunteers have to do this without any help from the Wikimedia Foundation.
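For the curious, the core of such a script could look something like the sketch below. This is NOT the WikiTeam code, just an illustration: it asks the Commons API for everything uploaded on one example day and downloads each file. The day, output directory and User-Agent string are placeholders I made up.

    # A rough sketch, not the actual WikiTeam script: list the files
    # uploaded to Commons on one example day and download each one.
    import json
    import os
    import urllib.parse
    import urllib.request

    API = "https://commons.wikimedia.org/w/api.php"
    DAY_START = "2011-06-01T00:00:00Z"   # example day (UTC)
    DAY_END   = "2011-06-02T00:00:00Z"
    OUTDIR = "commons-2011-06-01"        # example output directory

    def fetch(url):
        # Wikimedia asks clients to send a descriptive User-Agent.
        req = urllib.request.Request(
            url, headers={"User-Agent": "commons-day-sketch/0.1"})
        return urllib.request.urlopen(req)

    os.makedirs(OUTDIR, exist_ok=True)
    params = {
        "action": "query", "format": "json", "list": "allimages",
        "aisort": "timestamp", "aistart": DAY_START, "aiend": DAY_END,
        "aiprop": "url", "ailimit": "500", "continue": "",
    }
    while True:
        data = json.load(fetch(API + "?" + urllib.parse.urlencode(params)))
        for img in data["query"]["allimages"]:
            with fetch(img["url"]) as r, \
                 open(os.path.join(OUTDIR, img["name"]), "wb") as f:
                f.write(r.read())
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow the API continuation marker

The real script would also need retries, checksums and packing into per-day archives; the point here is only that the MediaWiki API makes per-day enumeration straightforward.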
I also started an effort on Meta (with low activity so far) to mirror the XML dumps.[2] If you know of universities or research groups that work with Wiki[pm]edia XML dumps, they would be a promising place to host mirrors.
If you want to download the texts to your PC, you only need 100 GB of free space and this Python script.[3]
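If you only want a single project's dump rather than the full script, the idea boils down to something like this sketch (not the actual wikipediadownloader.py; the wiki name and the "latest" run date are examples):

    # A minimal sketch: stream one project's pages-articles dump
    # from dumps.wikimedia.org to disk.
    import urllib.request

    WIKI = "enwiki"   # example database name; any project works
    DUMP = "latest"   # or a concrete run date such as "20110405"
    FILENAME = f"{WIKI}-{DUMP}-pages-articles.xml.bz2"
    URL = f"https://dumps.wikimedia.org/{WIKI}/{DUMP}/{FILENAME}"

    req = urllib.request.Request(
        URL, headers={"User-Agent": "dump-mirror-sketch/0.1"})
    with urllib.request.urlopen(req) as resp, open(FILENAME, "wb") as out:
        while True:
            chunk = resp.read(1 << 20)   # 1 MiB at a time; dumps are large
            if not chunk:
                break
            out.write(chunk)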
I heard that the Internet Archive saves the XML dumps quarterly or so, but there has been no official announcement. I also heard that the Library of Congress wanted to mirror the dumps, but there has been no news about that for a long time.
L'Encyclopédie has an "uptime"[4] of 260 years[5] and growing. Will Wiki[pm]edia projects reach that?
Regards, emijrp
[1] http://code.google.com/p/wikiteam/
[2] http://meta.wikimedia.org/wiki/Mirroring_Wikimedia_project_XML_dumps
[3] http://code.google.com/p/wikiteam/source/browse/trunk/wikipediadownloader.py
[4] http://en.wikipedia.org/wiki/Uptime
[5] http://en.wikipedia.org/wiki/Encyclop%C3%A9die
2011/6/2 Fae <faenwp@gmail.com>
Hi,
I'm taking part in an images discussion workshop with a number of academics tomorrow and could do with a statement about the WMF's long-term commitment to supporting Wikimedia Commons (and other projects) in terms of the public availability of media. Is there an official published policy I can point to that includes, say, a 10-year or 100-year commitment?
If it exists, this would be a key factor for researchers choosing where to share their images with the public.
Thanks,
Fae
--
http://enwp.org/user_talk:fae
Guide to email tags: http://j.mp/faetags