I've started my work at BEIC[1] and yesterday I had a sort of
revelation: their work on METS structural maps is the exact librarian
equivalent of what we do at Wikisource with ProofreadPage[2] page
transclusions. It's clear we can learn a lot from BEIC.
See for yourself: this is a scan where every image was manually mapped,
on the left, to the structure of pages, "chapters" (which are also OPAC
entries), etc. Doesn't it immediately make you think of the Index
namespace and the <pagelist /> tag?[3]
<http://131.175.183.1/view/action/nmets.do?DOCCHOICE=2244270.xml&dvs=1411563…>
All this is based on an open standard, METS, and its only required
section, the "structural map" (of the digital document).
<https://en.wikipedia.org/wiki/METS#The_7_sections_of_a_METS_document>
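For the curious, here is a minimal sketch of what reading such a
structural map could look like, just as an illustration (Python with the
standard library; the namespace URI is the official METS one, the file
name "record.mets.xml" is made up, and this is not BEIC's actual
tooling). In principle it would print the same page/chapter hierarchy
you see on the left of the viewer:

    import xml.etree.ElementTree as ET

    METS_NS = "{http://www.loc.gov/METS/}"  # official METS XML namespace

    def print_div(div, depth=0):
        # Each structMap <div> carries the hierarchy: TYPE (e.g. chapter)
        # and LABEL (the title shown in the viewer / OPAC entry).
        print("  " * depth + div.get("TYPE", "") + ": " + div.get("LABEL", ""))
        for child in div.findall(METS_NS + "div"):
            print_div(child, depth + 1)

    tree = ET.parse("record.mets.xml")  # hypothetical local METS record
    for struct_map in tree.getroot().findall(METS_NS + "structMap"):
        for top_div in struct_map.findall(METS_NS + "div"):
            print_div(top_div)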
Apparently, no other digital library does this job. BEIC and Wikisource
may be the only ones in the world, and of course they don't share a
standard. :( Even the BEIC viewer is a local "hack" on top of Ex Libris
Primo; neither I nor they know if there's any free software for METS. I
didn't find any mention of METS in "our places", so I ran to tell you all.
Nemo
[1] Biblioteca europea di informazione e cultura, where I'm currently a
wikimedian in residence: https://it.wikipedia.org/wiki/Progetto:GLAM/BEIC
No article outside it.wiki yet; see
http://cat.inist.fr/?aModele=afficheN&cpsidt=27893199 (I can provide
other sources if you want to write something about BEIC).
[2] https://www.mediawiki.org/wiki/Extension:Proofread_Page
[3] <https://en.wikisource.org/wiki/Help:Beginner%27s_guide_to_Index:_files>
It quietly happened in the MediaWiki release last week {git #f0d86f92
- Add additional interwiki links as requested in various bugs (bug
16962, bug 21915)}[1] that the mul: interwiki prefix has appeared,
allowing the Wikisources to point to wikisource.org. So we can now write
[[:mul:XXXXXXxxxxxx]] and it appears among the interwiki links as "More
languages". We will have a number of places, especially in the project
namespace, where we need to make some additions.
Now to see how we can get mul: working for Wikidata.
Regards, Billinghurst
[1] https://www.mediawiki.org/wiki/MediaWiki_1.24/wmf22#WikimediaMaintenance
I recently added two recommendations for improving MediaViewer here:
https://www.mediawiki.org/wiki/Multimedia/Media_Viewer#Recommendations_for_…
These are my pretty exotic requests, listed in the "Bugs" section (I don't
know if I posted them in the right section):
* While supporting multi-page DjVu, implement a tool to copy to the
clipboard, with a click, from the displayed DjVu page: (1) the pure text
layer, (2) the Lisp-like mapped text-layer structure at line or word
detail, (3) the XML-mapped text layer (Alex brollo)
* While supporting multi-page DjVu, implement a tool to crop and
download illustrations (as JPG or PNG) from the image layer of the
current DjVu page (Alex brollo)
Access to those two tools could, IMHO, greatly enhance the usability of
the data wrapped into our beloved DjVu files. Some JS tool could manage
the mapped DjVu text with interesting results. Many of us can get the
same data with a local DjVuLibre application, but getting the data from
MediaViewer would avoid the use of local command-line applications and
make these interesting, so far unused data easier to access for anyone.
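For comparison, here is roughly how one can already get the same data
locally with the DjVuLibre command-line tools (a minimal Python sketch;
the file name, page number and crop geometry are placeholders, not part
of any existing tool):

    import subprocess

    DJVU = "scan.djvu"  # placeholder file name
    PAGE = "12"         # placeholder page number

    # (1) pure text layer of one page
    text = subprocess.run(["djvutxt", "--page=" + PAGE, DJVU],
                          capture_output=True, text=True).stdout

    # (2) Lisp-like (s-expression) text layer at word detail
    sexpr = subprocess.run(["djvutxt", "--detail=word", "--page=" + PAGE, DJVU],
                           capture_output=True, text=True).stdout

    # (3) XML text layer of the whole document
    xml_text = subprocess.run(["djvutoxml", DJVU],
                              capture_output=True, text=True).stdout

    # Crop an illustration from the image layer: render a WxH+X+Y segment
    # of the page to PNM (convertible to JPG/PNG afterwards).
    subprocess.run(["ddjvu", "-format=pnm", "-page=" + PAGE,
                    "-segment=600x400+350+1200", DJVU, "figure.pnm"])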
Alex