Thanks Lars.

>Your examples 1 and 2 are the combination of two printed
>editions or variants into one digital product. That process is
>scholarly, text-critical editing, an intellectual exercise. For
>example, if the British and American editions would be found
>to differ not only in spelling but also in content, you would
>have to develop a policy for how to deal with that.

Absolutely correct, and that is exactly what we have done at Hebrew Wikisource. If a book requires special editorial guidelines beyond simple proofreading, a page is created in the Wikisource namespace, such as [[Wikisource:The Kinematics of Machinery]], where the community collaboratively develops those guidelines.

>The current
>process in Wikisource, as supported by the ProofreadPage
>extension, doesn't address such issues, but only converts one
>printed edition into a digital edition, through scanned images
>and human proofreading. It is a much more limited task, a
>mostly non-intellectual exercise, guided by simple rules.

Also correct, to some degree, for the Wikisources in the larger Latin-script languages, but not all of Wikisource works this way, not even in English and certainly not in many other languages. There are still plenty of people at en.wikisource who edit and format texts without PP (e.g. based on Gutenberg files or on typing the text themselves), plus Wikisource translations, etc. "Proofread Page" is a tool for Wikisource, not the definition of the project itself.

Even if many people at English Wikisource are not currently preoccupied with issues 1&2, wouldn't it be healthy to broaden horizons? Imagine Wikisource creating a modern version of the Loeb Classical Library based on collaborative work... It's wonderful to transcribe Mark Twain or the 1911 Britannica from scanned editions, but the full power and possibilities of the wiki platform go far beyond that!

>It can't link to both. Ideally, ProofreadPage would be remade so
>that each position in the book (a certain chapter, a certain page,
>a certain paragraph) has only one unique address. This is
>an aspect that apparently was not considered when the current
>software and namespace architecture were developed.

I totally agree that would be a very important function. Equally important would be for it to allow reference and citation with the simplest possible address: the title of the book plus completely flexible labels for the subsections, so that links can be written manually in an intuitive way.
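
Just to illustrate what I mean (purely hypothetical syntax, not anything ProofreadPage supports today; the subsection label is invented):

    [[The Kinematics of Machinery/Chapter 3]]
    [[The Kinematics of Machinery/Chapter 3#kinematic pairs]]

That is, one stable address per position in the work, built from the work's title plus a human-chosen subsection label, rather than an address tied to a particular scan page in the Page: namespace.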

I looked at Aubrey's onion layers again, and it seems to me they might actually be able to include the kinds of things I mentioned in 1&2, but I'd like to hear from her about that.

As to her wondering whether Wikisource is the place for such things, it really shouldn't be such an issue. A simple analogy is called for: let's say a Wikipedia article needs to be written about the 2012 US presidential elections. Writing such an article requires a huge amount of fact-finding and decisions about writing, presentation, and balance. Those problems are solved through good-faith collaborative editing, by documenting external sources and scholarship, and by a commitment to presenting all sides of an issue fairly (NPOV). That is why even a highly controversial topic like the US presidential elections can have an article in Wikipedia.

The obstacles to creating a critical or annotated version of a text at Wikisource are far *smaller*, in terms of original research or NPOV, than those involved in creating almost any Wikipedia article. The best way to find out is simply to try it!

I looked at the DPLA, by the way, and it looks like a wonderful thing. But I can't imagine it replacing Wikisource on quite a few fundamentals: open licensing, a full commitment to many languages and cultures with full localization, and creative collaboration not just to document the existing library but to enhance and improve it.

Does anyone know whether the years of discussion about "Wikidata" might have anything to do with #1-2?

Dovi