Lars Aronsson wrote:
It is increasingly common to add books to Wikisource by finding a PDF or DjVu file, uploading it to Commons, and then creating an Index: page on Wikisource for proofreading.
But this would be much easier if:
- The fields (author, title, etc.) of the Index page were filled in from the data already given on Commons. (Yes, those values could be wrong or need additional care, but if they are fetched from Commons as initial values they can always be edited afterwards.)
The problem is that the description pages of DjVu files on Commons do not have a parsable format.
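To illustrate the difficulty: without a guaranteed format, a bot has to scrape the parameters out of the raw wikitext of the File: page. A rough, untested Python sketch of that is below; the field names ("Author", "Title", ...) and the regular expression are only guesses, which is exactly the problem.

    # Sketch only: fetch the wikitext of a Commons file description page
    # and try to scrape author/title parameters from it. The parameter
    # names and the regex are assumptions, not a stable interface.
    import json, re, urllib.parse, urllib.request

    def fetch_description(filename):
        # Standard MediaWiki API query for the latest revision text.
        params = urllib.parse.urlencode({
            "action": "query", "prop": "revisions", "rvprop": "content",
            "titles": "File:" + filename, "format": "json"})
        url = "https://commons.wikimedia.org/w/api.php?" + params
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        page = next(iter(data["query"]["pages"].values()))
        return page["revisions"][0]["*"]

    def scrape_book_fields(wikitext):
        # Naive parameter scraping; it breaks as soon as the description
        # page is laid out differently.
        fields = {}
        for name in ("Author", "Title", "Year", "Publisher"):
            m = re.search(r"\|\s*%s\s*=\s*(.+)" % name, wikitext, re.IGNORECASE)
            if m:
                fields[name.lower()] = m.group(1).strip()
        return fields

    # "Example_book.djvu" is a placeholder file name.
    print(scrape_book_fields(fetch_description("Example_book.djvu")))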
- The <pagelist/> tag was already in the "pages" box.
That's easy. I did it for some sites using http://wikisource.org/wiki/MediaWiki:IndexForm.js
- All pages were created automatically with the OCR text from Commons, instead of leaving a long list of red links. (This would require the text for each page to be extracted, something that pdftotext can do in seconds but that Commons takes weeks to do.)
I do not understand what you mean. What you describe is _already_ implemented: when a page is created, its text is extracted from the text layer of the corresponding DjVu or PDF file. All you need to do is create DjVu files with a proper text layer.
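For what it's worth, here is a rough Python sketch (not the actual extension code) of the kind of check you can run locally before uploading. It assumes DjVuLibre's djvutxt and Poppler's pdftotext are installed; the file names are placeholders.

    # Sketch: verify a DjVu has a text layer, and show how fast a single
    # page of text can be pulled from a PDF with pdftotext.
    import subprocess

    def has_text_layer(djvu_path):
        # djvutxt dumps the hidden text layer to stdout; empty output
        # means the file was built without one (e.g. raw scans, no OCR).
        out = subprocess.run(["djvutxt", djvu_path],
                             capture_output=True, text=True, check=True).stdout
        return bool(out.strip())

    def extract_pdf_page(pdf_path, page):
        # -f/-l select a single page; "-" writes the text to stdout.
        return subprocess.run(
            ["pdftotext", "-f", str(page), "-l", str(page), pdf_path, "-"],
            capture_output=True, text=True, check=True).stdout

    if not has_text_layer("scan.djvu"):
        print("No text layer -- pages will be created empty.")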