Hi,
I would appreciate some feedback on how to link index pages on Wikidata and
how to store text quality:
https://www.wikidata.org/wiki/Wikidata_talk:Wikisource#Index_pages_and_text…
With Wikidata for Commons ramping up, we are getting to the point where
sharing metadata through Wikidata will finally be possible.
Cheers,
Micru
Hello,
Please join us on the next Wikimedia bug day:
**2014-10-08, 14:00–22:00 UTC** [1] in #wikimedia-tech on Freenode IRC.[2]
We will be triaging bug reports for the Collection extension (Book tool)
in general and PDF export in particular, which was just switched to a
new backend (OCG).[3] We have two immediate goals:
1) recover 100 % of the relevant reports from the defunct PediaPress
tracker;[4]
2) get a clean list of known PDF issues that the new backend didn't fix.
Everyone is welcome to join at any time during these weeks, and no
technical knowledge is needed! It's an easy way to get involved or to
give something back.
We encourage you to record your activity on the etherpad [4].
This information and more can be found here:
https://www.mediawiki.org/wiki/Bug_management/Triage/201410
For more information on triaging in general, check out
https://www.mediawiki.org/wiki/Bug_management/Triage
I look forward to seeing you there. Please distribute further by email,
talk pages, etc. (Collection is used on almost two thousand wikis!)
Sorry for the crossposting,
Nemo
[1] Timezone converter: http://everytimezone.com/#2014-10-08,120,5x1
[2] See http://meta.wikimedia.org/wiki/IRC for more info on IRC chat
[3] http://lists.wikimedia.org/pipermail/wikitech-l/2014-July/077867.html
[4] https://etherpad.wikimedia.org/BugTriage-Collection
I've started my work at BEIC[1] and yesterday I had a sort of
revelation: their work on METS structural maps is the exact librarian
equivalent of what we do at Wikisource with ProofreadPage[2] page
transclusions. It's clear we can learn a lot from BEIC.
See for yourself: this is a scan where every image was manually mapped
to the structure of pages, "chapters" (which are also OPAC entries),
etc., shown on the left. Doesn't it immediately make you think of the
Index namespace and the <pagelist /> tag?[3]
<http://131.175.183.1/view/action/nmets.do?DOCCHOICE=2244270.xml&dvs=1411563…>
All this is based on an open standard, METS, and its only required
section, the "structural map" (of the digital document).
<https://en.wikipedia.org/wiki/METS#The_7_sections_of_a_METS_document>
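To make the parallel concrete, here is a rough sketch (my own
illustration, not BEIC's code; the sample document is invented, though
structMap, div, TYPE, LABEL and ORDER are all standard METS names) of
browser JS that walks a structural map and prints the kind of hierarchy
ProofreadPage expresses with <pagelist />:

    // Walk a METS <structMap> and list its logical divisions.
    var sample =
      '<mets xmlns="http://www.loc.gov/METS/">' +
        '<structMap TYPE="logical">' +
          '<div TYPE="book" LABEL="Example volume">' +
            '<div TYPE="chapter" LABEL="Chapter I" ORDER="1"/>' +
            '<div TYPE="chapter" LABEL="Chapter II" ORDER="2"/>' +
          '</div>' +
        '</structMap>' +
      '</mets>';
    var doc = new DOMParser().parseFromString(sample, 'text/xml');
    var divs = doc.getElementsByTagNameNS('http://www.loc.gov/METS/', 'div');
    for (var i = 0; i < divs.length; i++) {
      var d = divs[i];
      console.log(d.getAttribute('TYPE') + ': ' + d.getAttribute('LABEL') +
        (d.getAttribute('ORDER') ? ' (order ' + d.getAttribute('ORDER') + ')' : ''));
    }

A real viewer would read the structMap from the library's METS file and
pair each div with its <fptr> file pointers, but the mapping to our
Index pages is already visible at this level.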
Apparently, no other digital library does this job. BEIC and Wikisource
may be the only ones in the world, and of course they don't share a
standard. :( Even the BEIC viewer is a local "hack" on top of Ex Libris
Primo; neither I nor they know whether any free software for METS
exists. I didn't find any mention of METS in "our places", so I ran to
tell you all.
Nemo
[1] Biblioteca europea di informazione e cultura, where I'm currently a
wikimedian in residence: https://it.wikipedia.org/wiki/Progetto:GLAM/BEIC
No article outside it.wiki yet; see
http://cat.inist.fr/?aModele=afficheN&cpsidt=27893199 (I can provide
other sources if you want to write something about BEIC).
[2] https://www.mediawiki.org/wiki/Extension:Proofread_Page
[3] <https://en.wikisource.org/wiki/Help:Beginner%27s_guide_to_Index:_files>
It quietly happened in last week's MediaWiki release {git #f0d86f92
- Add additional interwiki links as requested in various bugs (bug
16962, bug 21915)} [1] that the mul: interwiki prefix has appeared,
so the Wikisources are able to point to wikisource.org. We can now
write [[:mul:XXXXXXxxxxxx]] and it appears in the interwiki links as
"More languages". We will have a number of places, especially in the
project namespace, where we need to make some additions.
Now to see how we can get mul: working for Wikidata.
Regards, Billinghurst
[1] https://www.mediawiki.org/wiki/MediaWiki_1.24/wmf22#WikimediaMaintenance
I recently added two recommendations for improving MediaViewer here:
https://www.mediawiki.org/wiki/Multimedia/Media_Viewer#Recommendations_for_…
These are my pretty exotic requests, listed in the "Bugs" section (I
don't know if I posted them in the right section):
* While supporting multi-page DjVu, implement a tool to copy to the
clipboard, with a click, from the displayed DjVu page: (1) the pure
text layer, (2) the lisp-like mapped text-layer structure at line or
word detail, (3) the XML-mapped text layer (Alex Brollo)
* While supporting multi-page DjVu, implement a tool to crop and
download illustrations (as JPG or PNG) from the image layer of the
current DjVu page (Alex Brollo)
Access to those two tools could, IMHO, greatly enhance the usability of
the data wrapped in our beloved DjVu files. Some JS tool could manage
the mapped DjVu text with interesting results. Many of us can get the
same data from a local DjVuLibre application, but getting it from
MediaViewer would avoid local command-line applications and make that
interesting, so far unused data easier for anyone to access.
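To show what I mean by the "lisp-like" mapped text: it is roughly the
structure that DjVuLibre's djvused prints with its print-txt command.
Below, a rough JS sketch (just an illustration, not an existing gadget)
that strips the coordinates and keeps the words:

    // djvused print-txt output looks roughly like:
    //   (page 0 0 2550 3300
    //     (line 120 3100 2400 3160
    //       (word 120 3100 400 3160 "Hello")
    //       (word 420 3100 700 3160 "world")))
    // The quoted strings are the words; everything else is layout
    // metadata, so extracting them yields the pure text layer (1).
    function djvuPlainText(sexp) {
      var words = [], re = /"((?:[^"\\]|\\.)*)"/g, m;
      while ((m = re.exec(sexp)) !== null) {
        words.push(m[1].replace(/\\(.)/g, '$1'));
      }
      return words.join(' ');
    }

Keeping the coordinates instead of discarding them is what requests (2)
and (3) above are about.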
Alex
:[
Please help recheck the old bugs listed at
https://etherpad.wikimedia.org/p/BugTriage-mwlib, mostly filed by
Wikisource/Wikibooks users.
Nemo
-------- Forwarded Message --------
Subject: [Wikitech-ambassadors] Changes to PDF export; ZIM/EPUB will be
disabled soon
Date: Wed, 24 Sep 2014 19:01:38 -0700
From: Erik Moeller
Reply-To: Coordination of technology deployments across languages/projects
Hi folks,
A change from our legacy PDF export infrastructure to a more
maintainable system has been a long time coming. Thanks to the work of
C. Scott Ananian, Matt Walker, Max Semenik, Brad Jorsch, and the
Parsoid/Services teams, we're close to disabling the old PDF rendering
and enabling the new one as the default.
The old "mwlib" (ReportLab-based) PDF service will be disabled on
September 29 [1], and the new "ocg" (Parsoid/XeLaTeX-based) PDF
service will be enabled on the same date. The output looks very
different (two-column LaTeX-generated output) and there are a few more
customization options as well (visible when you use the "Create a
book" feature). This renderer has much-improved rendering of non-Latin
scripts and fixes many issues of the old PDF service.
There are still a number of bugs to work through as well, which we
will do as a low priority. Most notably, tables are a mess - and
since we have such a large variety of them, that's a pretty long tail.
This is not a high-priority project for us. We're looking for
co-maintainers, and we have some ideas on how to improve the
architecture to make it easy for the community to optimize for
different outputs --
if you're interested in joining the development effort, let us know
via the new services list (
https://lists.wikimedia.org/mailman/listinfo/services ), wikitech-l or
on #mediawiki-services on irc.freenode.net.
Please report any bugs against the "OCG" product in Bugzilla.
https://bugzilla.wikimedia.org/enter_bug.cgi?product=OCG
If you want to test before the deployment, you can add this snippet to
your common.js/global.js; it points the "Download as PDF" link in the
sidebar to the new renderer:

    // Swap the writer=rl query parameter for writer=rdf2latex so the
    // sidebar "Download as PDF" link uses the new OCG renderer.
    $('#coll-download-as-rl a').each(function () {
        this.href = this.href.replace(/([&?]writer=)rl(&|$)/g, '$1rdf2latex$2');
    });
As part of this change, we will disable ZIM and EPUB export for the
time being. If you're interested in working on ZIM or EPUB support for
the new offline content generator, or other export formats, please let
us know via the above channels.
Thanks,
Erik
[1] See https://wikitech.wikimedia.org/wiki/Deployments for the
planned deployment window
--
Erik Möller
VP of Engineering and Product Development, Wikimedia Foundation