Thanks Lars.
Your examples 1 and 2 are the combination of two printed editions or variants into one digital product. That process is scholarly, text-critical editing, an intellectual exercise. For example, if the British and American editions were found to differ not only in spelling but also in content, you would have to develop a policy for how to deal with that.
Absolutely correct, and that is exactly what we have done at Hebrew Wikisource. If there is a book that requires special editorial guidelines beyond just simple proofreading, then a page in the Wikisource namespace is created, such as [[Wikisource:The Kinematics of Machinery]], where the community collaboratively develops those guidelines.
The current process in Wikisource, as supported by the ProofreadPage extension, doesn't address such issues, but only converts one printed edition into a digital edition, through scanned images and human proofreading. It is a much more limited task, a mostly non-intellectual exercise, guided by simple rules.
Also correct to some degree for Wikisources in the larger Latin-script languages, but not all of Wikisource is this process, not even in English and certainly not in many other languages. There are still plenty of people at en.wikisource who edit and format texts without PP (e.g. working from Gutenberg files or typing texts in themselves), as well as Wikisource translations, etc. "Proofread Page" is a tool for Wikisource, not the definition of the project itself.
Even if many people at English Wikisource are not currently preoccupied with issues 1&2, wouldn't it be healthy to broaden horizons? Imagine Wikisource creating a modern version of the Loeb Classical Library based on collaborative work... It's wonderful to transcribe Mark Twain or the 1911 Britannica from scanned editions, but the full power and possibilities of the Wiki platform are so much more than that!
It can't link to both. Ideally, ProofreadPage would be remade so that each position in the book (a certain chapter, a certain page, a certain paragraph) has only one unique address. This is an aspect that apparently was not considered when the current software and namespace architecture were developed.
Totally agree, that would be a very important function. Equally important would be for the function to allow reference and citation with the simplest address possible: the title of the book plus completely flexible labels for the subsections, so that links can be written manually in an intuitive way.
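Just to illustrate the kind of addressing I have in mind, here is a rough sketch in JavaScript (purely hypothetical: the function name, the label scheme and the /wiki/ URL prefix are my own assumptions, not anything ProofreadPage provides today):

    // Hypothetical helper: build one stable, human-writable address from
    // a book title plus a freely chosen label for a subsection.
    function sourceLink( title, label ) {
        var page = encodeURIComponent( title.replace( / /g, '_' ) );
        return '/wiki/' + page + ( label ? '#' + encodeURIComponent( label ) : '' );
    }
    // sourceLink( 'The Kinematics of Machinery', 'chapter-3' )
    //   -> '/wiki/The_Kinematics_of_Machinery#chapter-3'

The point is simply that a given chapter or paragraph would always answer to one such address, no matter how the underlying pages are organized.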
I looked at Aubrey's onion layers again and it seems to me they actually might be able to include the kinds of things I mentioned in 1&2, but I'd like to hear from her about that.
As to her wondering whether Wikisource is the place for such things, it really shouldn't be such an issue. A simple analogy is called for: Let's say a Wikipedia article needs to be written about the 2012 US Presidential elections. Writing such an article requires a huge amount of fact finding, decisions about writing and presentation and balance. Those problems are solved when there is good faith collaborative editing, by documenting external sources and scholarship, and by a commitment to presenting all sides of an issue fairly (NPOV). That is why even a highly controversial topic like the US presidential elections can have an article in Wikipedia.
The obstacles in creating a critical or annotated version of a text at Wikisource are far *less* in terms of original research or NPOV than in creating almost any Wikipedia article. The best way to find out is to simply try it!
I looked at DPLA by the way and it looks like a wonderful thing. But I can't imagine it replacing Wikisource in terms of quite a few fundamentals: Open Licensing, full commitment to many languages and cultures with full localization, and creative collaboration not just to document the existing library, but to enhance it and improve it.
Does anyone understand whether the years of discussion of "Wikidata" might have anything to do with #1-2?
Dovi
Hi all, I'm new to this discussion; I'm an it.source user.
About Aubrey's interesting onion model: I just discovered some basic AJAX tricks. They are great for changing the contents of a page using data stored in different pages and/or JavaScript variables (such as wgUserName) and/or preferences or gadgets.
So, I'll think a little about an "AJAX onion" that would allow saving the "leaves" of the onion in different pages. But consider that any dynamic change to the rendered HTML is visible only to the user, and there is little or no trace of it in the wiki code.
Just now the servers seem to be down, so I can't test what happens when saving an HTML page whose code has been manipulated via AJAX. I presume any change will be saved, but I have to test to be sure.
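To make the idea a bit more concrete, here is a minimal, untested sketch of the kind of gadget I mean (the page name and the container id are placeholders, and it assumes the mediawiki.util module is available):

    // Fetch the parsed HTML of another wiki page through the MediaWiki API
    // and swap it into a container on the current page. The change happens
    // only in the reader's browser; nothing is written into the wiki code.
    $.getJSON( mw.util.wikiScript( 'api' ), {
        action: 'parse',
        page: 'Wikisource:Onion/Annotation layer',  // placeholder page name
        prop: 'text',
        format: 'json'
    }, function ( data ) {
        if ( data.parse && data.parse.text ) {
            $( '#onion-layer' ).html( data.parse.text['*'] );  // placeholder container
        }
    } );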
Alex
On Aug 6, 2012, at 8:16 AM, Dovi Jacobs dovijacobs@yahoo.com wrote:
Thanks Lars.
Your examples 1 and 2 are the combination of two printed editions or variants into one digital product. That process is scholarly, text-critical editing, an intellectual exercise. For example, if the British and American editions were found to differ not only in spelling but also in content, you would have to develop a policy for how to deal with that.
Absolutely correct, and that is exactly what we have done at Hebrew Wikisource. If there is a book that requires special editorial guidelines beyond just simple proofreading, then a page in the Wikisource namespace is created, such as [[Wikisource:The Kinematics of Machinery]], where the community collaboratively develops those guidelines.
<snip>
Does anyone understand whether the years of discussion of "Wikidata" might have anything to do with #1-2?
Wikidata is really not being thought about for such a thing. Wikidata is more about data*points* than anything else. It will also give permanent IDs to wiki pages in a way that won't be broken by moves or renames. So with a (forevermore identifiable) text on Wikisource linking to the author page, which is linked to the (forevermore identifiable) biography article on Wikipedia, which is also linked to the LOC permanent URL and, in the same way, to another authority's database, we can say the author of this work is the same as the person in that biography, who is the same as the person listed in that database and in that one too. Even if they all *name* the author with different variations, we will be able to have them all linked through the Wikipedia biography. Or it can be done with a subject, linking through "dc subject" to say that this Wikisource text contains information about the same concept that this Wikipedia article contains information about (which is pretty much how they plan to do interwikis, from what I understand).
This is done without having to decide on any names for labels. The only label is some string of numbers unique to Wikidata, and this label is not so much defined by any other label as it is correlated to the others. Wikidata will give machine-readable labels to information that is defined only by its source (X is defined only as "the number given by the 2010 US census", and X is correlated with the population of Iowa), and Wikidata will then correlate everything else which is alike (all the fields within infoboxes in all languages of Wikipedia which are to display numbers that correlate with the population of Iowa). That is a bit simplified, because all of this is done with multiple sources and without picking one source to be the definitive one (the population of Iowa would be correlated with many datapoints, with the more recent being weighted, in order to display a range in the infoboxes instead of the simple example of displaying X).
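Very roughly, and with the warning that this is only my own sketch and not the real Wikidata format, I picture one of those correlated datapoints as something like this:

    // A made-up sketch of a correlated datapoint, not Wikidata's actual schema.
    // The only real label is an opaque string of numbers; names in any language
    // and the values from each source are just things correlated to that ID.
    var dataPoint = {
        id: '12345678',                       // opaque, permanent identifier
        correlatedLabels: {
            en: 'population of Iowa',
            de: 'Einwohnerzahl von Iowa'
        },
        values: [
            { value: 3046355, source: '2010 US census' },        // figures illustrative only
            { value: 3060000, source: 'a more recent estimate' }
        ]
    };
    // An infobox field in any language of Wikipedia would point at the ID rather
    // than at a name, and could display a range built from the weighted values.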
I can barely grasp much of the Wikidata stuff myself, so fair warning: I might be misleading you all horribly! The neat thing about it is that, since it is not semantic, the underlying idea of how to describe the data does seem to be a useful way to *think* about alternative texts. However, I do not understand how Wikidata itself would quite fit in with what you are thinking about in 1 & 2. Wikidata sort of ignores the expression level, from what I heard. Now, I don't really understand how it *can* ignore the expression level, but I am repeating what I heard. As I have pretty much trained myself to think about Wikisource at the expression level, this is a big roadblock for me. I just don't see how Wikidata would find a handle on the text itself in Wikisource, as opposed to how it might handle the metadata about the texts.
I hope you can make sense of some of this, and that I have not largely misunderstood it all myself.
Birgitte SB
On 2012-08-06 15:16, Dovi Jacobs wrote:
Even if many people at English Wikisource are not currently preoccupied with issues 1&2, wouldn't it be healthy to broaden horizons? Imagine Wikisource creating a modern version of the Loeb Classical Library based on collaborative work... It's wonderful to transcribe Mark Twain or the 1911 Britannica from scanned editions, but the full power and possibilities of the Wiki platform are so much more than that!
Nothing stops you from editing a classical library, but it is a different activity than scanning and proofreading. It is similar to creating your own free translations of classical works, in the sense that it combines the reproduction of an existing book with your own creative/intellectual input. It borders on being a Wikibooks activity rather than a Wikisource activity.
I personally think that simple scanning and proofreading is the activity where we can most easily grow Wikisource. Since the job is mostly non-intellectual, many people can be instructed to help without creating edit wars. The progress is linear in the number of pages: to scan and proofread 15 volumes of the collected works of an author takes 15 times more man-hours than a single book. Translation or scholarly editing requires more coordination, and a larger work takes more time than the sum of its parts.
Does anyone understand whether the years of discussion of "Wikidata" might have anything to do with #1-2?
I'm afraid that Wikidata can function as a honey-trap. As an abstract idea, it can be perceived as a solution to any problem, but it would take an unspecified number of years to get there. As a concrete software development project during 2012, it will address interwiki (interlanguage) links and nothing more. By honey-trap I mean that if you think Wikidata can solve your problem, you will be trapped waiting for that to happen, while years pass by that you could have used better.