Have you considered uploading to the Internet Archive first and then transferring to Commons with the IA-upload tool? (This is the normal process for texts.)
I don't know whether that step can be automated, though.
https://internetarchive.readthedocs.org/en/latest/cli.html https://tools.wmflabs.org/ia-upload/commons/init https://github.com/Tpt/ia-upload
I'm afraid I've only done them one at a time. Cheers
On Mon, Feb 15, 2016 at 7:18 AM, Olaf Janssen Olaf.Janssen@kb.nl wrote:
Hi,
I’m preparing an image donation of some 350 picture books dating from 1810 to 1880 (taken from the collection http://www.geheugenvannederland.nl/?/en/collecties/prentenboeken_van_1810_to... )
For every book I’ve constructed an XML file describing the pages (metadata). So, e.g., for a book of 20 pages I have an XML file with 20 records. I can upload these in the normal way via the GWToolset web interface, also assigning a Commons category to the book.
For one book that’s doable, but for 350 books I would need to upload 350 XML files, one by one, via the GWToolset web interface (using the same JSON mapping file for every upload). That would take me a lot of time (and it’s rather boring)…
So I’m wondering if/how I could automate this. Is there a more direct or efficient way?
I can imagine doing some command-line interfacing (Pywikibot?), with the XML file, the JSON mapping and the target Commons category name as input parameters. Would that be an option?
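To illustrate the kind of thing I have in mind — a rough sketch only, where the XML element names, the mapping format and the template are made up just to show the shape:

```python
# Sketch: turn one per-book XML file plus a JSON-style field mapping into
# per-page wikitext. Element names ("record", etc.) and the {{Artwork}}
# template are illustrative assumptions, not my actual schema.
import xml.etree.ElementTree as ET

def records_from_xml(xml_text):
    """Yield one {field: value} dict per <record> element."""
    root = ET.fromstring(xml_text)
    for rec in root.iter("record"):
        yield {child.tag: (child.text or "") for child in rec}

def wikitext_for(record, mapping, category):
    """Render one record as wikitext via a {template_param: xml_field} mapping."""
    lines = ["{{Artwork"]
    for param, field in mapping.items():
        lines.append("|%s=%s" % (param, record.get(field, "")))
    lines.append("}}")
    lines.append("[[Category:%s]]" % category)
    return "\n".join(lines)
```

Each resulting (image URL, wikitext) pair could then presumably be handed to Pywikibot's upload machinery in a loop over all 350 XML files — but whether something like this, or GWToolset itself, can be driven that way is exactly my question.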
Any tricks, tips & directions are very welcome.
Met vriendelijke groet / With kind regards
Olaf Janssen
Wikipedia & open data coordinator
Koninklijke Bibliotheek - National Library of the Netherlands olaf.janssen@kb.nl
+31 (0)70 3140 388 @ookgezellig
www.slideshare.net/OlafJanssenNL
Prins Willem-Alexanderhof 5 | 2595 BE Den Haag | Postbus 90407 | 2509 LK Den Haag | (070) 314 09 11 | www.kb.nl
English version http://www.kb.nl/en/email | Disclaimer http://www.kb.nl/disclaimer
Glamtools mailing list Glamtools@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/glamtools