Magnus Manske wrote:
On Sat, Aug 14, 2010 at 8:49 PM, Thomas Voegtlin wrote:
Also, in on_body_scroll, you could avoid the for loop by dividing
$('#body').position()['scrollTop'] by the height of an image.
'fraid not - sometimes the rendered text runs longer than the image,
so the "row" can be higher than the image. Example:
(scroll down and you'll see it)
Hmm, you are right; I had a "pure scan" version in mind.
But it would be nice to have a version that does not load
the text, just to see whether the WMF servers are fast
enough to provide the same fluidity as the Google Books viewer.
I don't think the text retrieval is the slow step here...
No, but the for loop in the scroll handler makes it a bit slow.
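One way to drop the per-row loop from the scroll handler is a binary search over the rows' cumulative top offsets. This is only a sketch under assumptions: the function name and the rowTops array are hypothetical, and rowTops would have to be rebuilt whenever a row's height changes (which, as noted above, happens when text runs longer than the image).

```javascript
// Hypothetical sketch: find which page is under the current scroll
// position in O(log n) instead of looping over every row.
// rowTops[i] = cumulative top offset (in px) of row i, ascending.
function pageAtScroll(rowTops, scrollTop) {
  var lo = 0, hi = rowTops.length - 1, ans = 0;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    if (rowTops[mid] <= scrollTop) {
      ans = mid;       // row mid starts at or above scrollTop
      lo = mid + 1;    // try to find a later row that still qualifies
    } else {
      hi = mid - 1;
    }
  }
  return ans;          // index of the topmost visible row
}
```

In the handler one would call this with the value read from the scroll position, instead of iterating over all rows on every scroll event.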
Another problem occurs when you are viewing page p and page p-1 is
not loaded yet: if you scroll up, then at the moment p-1 loads, its
container div grows in height and the text you are viewing (page p)
is pushed towards the bottom. On the Dictionary of National Biography
this offset can be quite large, so you lose track of the text you
were reading.
I don't really know how to solve this; but it seems to me that
using divs with variable size is part of the problem here too.
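One possible fix, sketched here as a pure helper with hypothetical names: when a container above the viewport grows after its page loads, add the height delta back onto the scroll position so the visible text stays anchored.

```javascript
// Hypothetical sketch: compensate the scroll position when a page
// above the viewport finishes loading and its container div grows.
// oldHeight/newHeight are the container's heights before and after
// the load; isAboveViewport says whether it sits above scrollTop.
function compensateScroll(scrollTop, oldHeight, newHeight, isAboveViewport) {
  if (!isAboveViewport) {
    return scrollTop;                  // growth below us: nothing moves
  }
  return scrollTop + (newHeight - oldHeight); // shift down by the delta
}
```

In the real handler the corrected value would then be applied right after inserting the text, e.g. via jQuery's .scrollTop(), so the reader never sees the jump.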
I've switched to specifying widths rounded to 100s; however, the API
still gives me off-by-one images (599 px instead of 600 px). I could
hack the API thumbnail URL, though. Better yet, I can probably skip
that step entirely after the first one...
I can see that too (599 instead of 600); but that's not a
problem, because the filename does not change: the prefix is
still "600px-"
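The rounding plus the filename prefix can be sketched as a small helper (the function name is hypothetical; only the "NNNpx-" prefix convention comes from the discussion above):

```javascript
// Hypothetical sketch: round the requested thumbnail width to the
// nearest 100 px and build the "NNNpx-" prefix used in MediaWiki
// thumbnail filenames. Even if the API reports a 599 px image for
// a 600 px request, the prefix in the filename stays "600px-".
function roundedThumbPrefix(width) {
  var w = Math.round(width / 100) * 100;
  return w + 'px-';
}
```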
Why load a giant text and then hack around on broken
HTML, when I can
just query each page individually? It's not really slow, at least not
in Google Chrome.
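Querying each page individually could look roughly like the following URL builder. This is an assumption-laden sketch, not the actual code: the endpoint, the "Page:" title scheme, and the function name are all hypothetical, though action=parse is a real MediaWiki API module.

```javascript
// Hypothetical sketch: build one API request per proofread page
// instead of fetching one giant HTML blob and hacking it apart.
// apiBase, indexTitle and the "Page:Index/N" scheme are assumptions.
function pageQueryUrl(apiBase, indexTitle, pageNumber) {
  var title = 'Page:' + indexTitle + '/' + pageNumber;
  return apiBase + '?action=parse&format=json&page=' +
         encodeURIComponent(title);
}
```

Each page's HTML would then arrive in its own small response, which can be inserted into its row's div as it loads.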
Oh, that was in order to display the text without headers, footers
and page breaks; but I guess it's ok to show headers, because
they are in the scans too. (Here I'm not talking about the headers
that you hide with your button; I mean the other elements
that are in this field: running title, references, etc.)