Hi Seb, I'm answering personally since I'm the person most involved in DjVu exploration in the it.source group. <br><br><br><div class="gmail_quote">2011/2/20 Seb35 <span dir="ltr"><<a href="mailto:seb35wikipedia@gmail.com">seb35wikipedia@gmail.com</a>></span><br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">Hi Andrea,<br>
<br>
I saw VIGNERON and Jean-Frédéric today and we spoke about that. Jean-Fred<br>
and I are a bit skeptical about the practical implementation of such a<br>
system; here are some questions that I (or we) have been asking (the questions<br>
are listed in order of importance):<br>
<br>
- how many books have such coordinates? I know the BnF-partnership books<br>
have such coordinates because they were originally in the OCR files (1057 books),<br>
but on WS a lot of books have non-valid coordinates (word 0 0 1 1 "")<br>
because Wikisourcians didn't know the meaning of these figures<br>
(the DjVu format is quite difficult to understand anyway); I don't know whether<br>
standard OCR programs have an option to output the coordinates of newly<br>
OCRed books<br></blockquote><div>Coordinates come from the OCR interpretation. All Internet Archive books have them, both in the DjVu file text layer and in the djvu.xml file. You can verify the presence of coordinates simply with djView: open the file, go to View, select Display->Hidden text and, if coordinates exist, you'll see the word text superimposed on the word images. <br>
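If it helps, here is a minimal sketch (assuming the usual Internet Archive *_djvu.xml layout and a hypothetical file name) of how the word coordinates can be read with Python:<br>
<pre>
# Sketch: list word bounding boxes from an Internet Archive *_djvu.xml file.
# Assumes each word is a WORD element with a coords attribute; the coordinate
# order (usually left,bottom,right,top and sometimes a baseline) may vary.
import xml.etree.ElementTree as ET

def iter_words(djvu_xml_path):
    for _, elem in ET.iterparse(djvu_xml_path):
        if elem.tag.upper() == "WORD":
            coords = [int(c) for c in elem.get("coords", "").split(",") if c]
            yield elem.text, coords
            elem.clear()  # keep memory low on large books

for text, box in iter_words("bookid_djvu.xml"):  # hypothetical file name
    print(text, box)
</pre>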
<br>You can't get coordinates from an end-user OCR program such as FineReader 10; you have to use professional versions, such as OCR engines designed for mass, automated batch OCR routines. <br> </div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<br>
- what is the confidence in the coordinates? If you serve half a word, it<br>
will be difficult to recognize the entire word<br></blockquote><div>The confidence in the coordinates is extremely high. Coordinate calculation is the first step of any OCR interpretation, so if you get a decent OCR interpretation, the coordinate calculation is essentially perfect. Obviously, you'll find wrong coordinates wherever you find a wrong OCR interpretation.<br>
</div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<br>
- I wonder how you can validate the correctness of a given word for a<br>
given person: a person (for example) creates an account on WS, a captcha with<br>
a word is shown, so how do you know whether his/her answer is correct? I agree this<br>
step disappears if you ask a pool of volunteers to answer different<br>
captcha words, but in that case it comes down to the classical check by<br>
Wikisourcians, in a specialized form for handling particular cases<br></blockquote><div>There are different strategies, all based on fully automated validation of user interpretations.<br># classical: submit two words, one known and used as a control, the other unknown. An exact interpretation of the known word validates the interpretation of the unknown one.<br>
# alternative: ask for more than one interpretation of the unknown word from different users/sessions/days, and validate the interpretation when the answers match.<br>
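Just to make the idea concrete, here is a minimal Python sketch of both strategies (the helper names and the agreement threshold are assumptions, not an existing implementation):<br>
<pre>
# Sketch of the two validation strategies described above.
from collections import Counter

AGREEMENT_THRESHOLD = 3  # assumed value: how many matching answers validate a word

def validate_with_control(control_answer, control_truth, unknown_answer):
    # classical strategy: a correct answer on the known control word
    # validates the interpretation given for the unknown word
    if control_answer.strip() == control_truth:
        return unknown_answer.strip()
    return None

def validate_by_agreement(answers):
    # alternative strategy: accept the unknown word once enough
    # independent answers (users/sessions/days) agree
    word, count = Counter(a.strip() for a in answers).most_common(1)[0]
    return word if count >= AGREEMENT_THRESHOLD else None

print(validate_with_control("maison", "maison", "chateau"))    # accepted: chateau
print(validate_by_agreement(["chat", "chat", "chat", "char"]))  # accepted: chat
</pre>
</div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">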
<br>
- you give the example of a ^ in a word, but how do you select the<br>
OCR mistakes? Although this is not really an issue, since you can already make<br>
a list of common mistakes and it will be sufficient at first. I<br>
know French Wikisourcians (at least, probably others too) already keep a<br>
list of frequent mistakes (II->Il, 1l->Il, c->e ...), sometimes for a<br>
given book (the Trévoux, in French, of 1771 it seems to me).<br></blockquote><div>FineReader OCR applications use the character ^ for uninterpretable characters. Other tricks to find "probably wrong" words can be imagined, e.g. matching words against a dictionary. Common "scannos" are better handled with different routines, in JavaScript or Python; for example, you can wrap them into a Regex Menu Framework clean-up routine (see the clean-up routine used by [[en:User:Inductiveload]], or the postOCR routine in the RegexMenuFramework gadget of it.source, which was built from Inductiveload's clean-up routine). Wikicaptcha would handle unusual OCR mistakes, not the common ones. <br>
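As a rough illustration (the tiny word list and the regex are assumptions, not the actual gadget code), a suspect-word filter could look like this in Python:<br>
<pre>
# Sketch: flag "probably wrong" words that Wikicaptcha could serve to users.
import re

def is_suspect(word, dictionary):
    if "^" in word:          # FineReader marker for an uninterpretable character
        return True
    cleaned = re.sub(r"[^\w'-]", "", word).lower()
    return bool(cleaned) and cleaned not in dictionary

# Toy dictionary; in practice a full wordlist for the book's language.
dictionary = {"le", "chat", "dort"}
print([w for w in "Le ch^t d0rt".split() if is_suspect(w, dictionary)])  # ['ch^t', 'd0rt']
</pre>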
</div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<br>
I know Google had a similar system for their digitization, but I don't<br>
know the details exactly. To me there are a lot of details that make<br>
the overall idea difficult to carry out (although I would prefer to think the<br>
contrary), but perhaps you have some answers.<br></blockquote><div>Unfortunately, Google doesn't share the OCR mappings of its OCR; it shares only the "pure text". This is one of the sound reasons for uploading Google PDFs into the Internet Archive, thereby getting their "derivation", i.e. the publication of a derived DjVu file whose text layer comes from another (usually good) OCR interpretation. <br>
</div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Sébastien<br>
<br>
PS: I had another idea in a slightly different application field (roughly<br>
speaking, automated validation of texts) but close to this one; I will write an<br>
email next week about that (there are already some notes at<br>
<<a href="http://wikisource.org/wiki/User:Seb35/Reverse_OCR" target="_blank">http://wikisource.org/wiki/User:Seb35/Reverse_OCR</a>>).<br></blockquote><div>I'll take a look with great interest. </div></div><br>
Alex Brollo<br>