On Fri, Aug 13, 2010 at 4:27 AM, Lars Aronsson <lars@aronsson.se> wrote:
> Wikipedia, Wikibooks and Wikisource mostly use web 1.0 technology. A very different approach to web browsing was taken when Google Maps was launched in 2005, the poster project for the "web 2.0". You arrive at the map site with a coordinate. From there, you can pan in any direction and new parts of the map (called "tiles") are downloaded by asynchronous JavaScript and XML (AJAX) calls as you go. Your browser will never hold the entire map. It doesn't matter how big the entire map is, just like it doesn't matter how big the entire Wikipedia website is. The unit of information to fetch is the "tile", just like the web 1.0 unit was the HTML page.
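Just to make the comparison concrete, the tile pattern Lars describes amounts to roughly this (a minimal sketch; the tile server URL, the 256-pixel tile size, and all the names here are mine for illustration, not Google's actual scheme):

// Fetch only the tiles covering the viewport, requesting new ones
// as the user pans; the full map is never in the browser at once.
const TILE_SIZE = 256; // pixels per square tile (assumed)

function tileUrl(zoom: number, x: number, y: number): string {
  return `https://tiles.example.org/${zoom}/${x}/${y}.png`; // hypothetical server
}

// Which tile columns/rows are visible for a viewport whose top-left
// corner sits at pixel offset (px, py) in this zoom level's pixel grid?
function visibleTiles(px: number, py: number, width: number, height: number, zoom: number) {
  const tiles: { zoom: number; x: number; y: number }[] = [];
  for (let x = Math.floor(px / TILE_SIZE); x <= Math.floor((px + width) / TILE_SIZE); x++) {
    for (let y = Math.floor(py / TILE_SIZE); y <= Math.floor((py + height) / TILE_SIZE); y++) {
      tiles.push({ zoom, x, y });
    }
  }
  return tiles;
}

// On every pan, fetch any visible tile not already cached.
const cache = new Map<string, Promise<Blob>>();
function loadTile(zoom: number, x: number, y: number): Promise<Blob> {
  const key = `${zoom}/${x}/${y}`;
  if (!cache.has(key)) {
    cache.set(key, fetch(tileUrl(zoom, x, y)).then(r => r.blob()));
  }
  return cache.get(key)!;
}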
I have doubts about whether this is the right approach for books. Offering the book as plain HTML pages, one for each chapter and also one for the whole book (for printing and searching), seems more useful. Browsers cope with such long pages just fine, and the approach preserves all the functionality people are used to: links, back, and forward all behave as expected, with no extra work on our part. This isn't an option for Google Maps, because
1) They're dealing with way more data. They can't possibly send you a map of the whole world in full detail on every page load. It would be many megabytes even compressed.
2) The page model doesn't fit their needs. Even if they served the whole map, they'd need complicated JavaScript to have it scroll and zoom and so forth as the user expects. This isn't needed for transcribed text, and trying to reimplement all the scrolling/bookmarking/navigation/search/etc. functionality that users are used to for regular web pages would be counterproductive.
Traditional web pages are designed to present formatted text, possibly with interspersed images, without precise control over layout. Anything that fits that format is probably best distributed as plain HTML, not as some fancy web app. Not to mention that the latter is a lot more work to implement.
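To show how little machinery the plain-HTML route actually needs, here's a rough sketch of the whole pipeline (the chapter file names, page template, and helper functions are all made up for illustration):

// Emit one HTML page per chapter plus one page for the whole book.
import { readFileSync, writeFileSync } from "fs";

const chapters = ["ch01.txt", "ch02.txt", "ch03.txt"]; // assumed input files

function escapeHtml(s: string): string {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function page(title: string, body: string): string {
  return `<!DOCTYPE html>\n<html><head><title>${escapeHtml(title)}</title></head>\n<body>${body}</body></html>\n`;
}

// Very crude markup: each blank-line-separated source paragraph becomes a <p>.
function toHtml(text: string): string {
  return text.split(/\n\s*\n/).map(p => `<p>${escapeHtml(p)}</p>`).join("\n");
}

const bodies = chapters.map(f => toHtml(readFileSync(f, "utf8")));

// One page per chapter: ordinary links, back/forward, and in-page
// search all work with no JavaScript at all.
bodies.forEach((body, i) => writeFileSync(`chapter-${i + 1}.html`, page(`Chapter ${i + 1}`, body)));

// And one page holding the whole book, for printing and searching.
writeFileSync("book.html", page("Whole book", bodies.join("\n<hr>\n")));

Everything beyond that (linking, scrolling, searching, printing) the browser already does for us.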