I'm building a small pywikibot tool which is designed to be installed
via pip (and in turn installs Pywikibot via pip).
The tool uses page.touch(), which raises a
pywikibot.i18n.TranslationError when I run it.
page.touch() gets its edit summary from i18n.twtranslate(self.site,
'pywikibot-touch'), which in turn is defined in /scripts/i18n/pywikibot/.
Unless I'm confused, the error occurs because the pip distribution does not
include the /scripts folder or the i18n submodule.
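In case it helps anyone reproduce this, here is a quick stdlib-only probe to check whether the messages bundle is importable in a given environment. The package name below is just my guess based on the path above; adjust it to whatever twtranslate actually looks up:

```python
import importlib.util

def has_messages_package(name='scripts.i18n'):
    """Return True if the i18n messages package can be found on this install."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # The parent package (e.g. 'scripts') is missing entirely.
        return False

# Prints False in an environment where the bundle was not shipped.
print(has_messages_package())
```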
So my first question is: am I just doing something obviously wrong, or
should the i18n submodule have been available via pip as well?
If it's not just me, would it not make sense for any i18n files needed by
the Pywikibot *library* itself to be distributed in the same pip package?
(i18n for scripts is a separate issue, since scripts cannot be installed
via pip.)
André / Lokal_Profil
André Costa | Chief Operating Officer, Wikimedia Sverige |
Andre.Costa(a)wikimedia.se | +46 (0)733-964574
Support free knowledge, become a member of Wikimedia Sverige.
Read more at blimedlem.wikimedia.se
sent from my mobile, all typos are due to autocorrect ;)
I’m assisting our local museums and galleries on a project to open up around 4,000 images as CC-0 via Commons. We picked a bad time to do it (Pattypan being borked).
I’m a PyWikiBot noob - although I have some long-term familiarity with Python. I’m trying to work out if using PWB might be a route to get these images onto Commons.
I have both downloaded image files and URLs that I can point a script at, as well as good metadata for them.
I’ve been looking at pre-canned PWB scripts and see that data_ingestion *might* do the trick.
I see that it is in /scripts/archive/
Is this still a viable script - or is it deprecated in some way?
Does anyone have a guide for using it beyond the comments at the top of the script?
I had a look at /tests/data/csv_ingestion.csv and it looks rather bare; I’d expect more fields. I’d rather construct something closer to the metadata fields I’d use with Pattypan, rather than be faced with 4,000 uploaded files and have to add metadata to them in a separate process or *shudder* manually.
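For what it’s worth, the description-page side doesn’t have to depend on data_ingestion’s CSV layout: the wikitext can be rendered from your own metadata columns first and then handed to whatever uploader you end up using. A rough sketch, where the column names and the {{Information}}/{{cc-zero}} layout are purely illustrative, not what data_ingestion itself expects:

```python
import csv
import io

# Hypothetical columns; match these to your own metadata export.
PAGE_TEMPLATE = """=={{{{int:filedesc}}}}==
{{{{Information
|description={description}
|date={date}
|source={source}
|author={author}
}}}}

=={{{{int:license-header}}}}==
{{{{cc-zero}}}}
"""

def description_pages(csv_text):
    """Map filename -> rendered wikitext for each CSV row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row['filename']: PAGE_TEMPLATE.format(**row) for row in reader}

sample = (
    "filename,description,date,source,author\n"
    "img_0001.jpg,Town hall facade,1923,Museum archive,Unknown photographer\n"
)
pages = description_pages(sample)
print(pages['img_0001.jpg'])
```

That way the metadata travels with each upload from the start instead of being patched in afterwards.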
Any suggestions (including ‘don’t do this’) with explanations would be welcome please.