On Aug 6, 2012, at 8:16 AM, Dovi Jacobs dovijacobs@yahoo.com wrote:
Thanks Lars.
Your examples 1 and 2 are the combination of two printed editions or variants into one digital product. That process is scholarly, text-critical editing, an intellectual exercise. For example, if the British and American editions were found to differ not only in spelling but also in content, you would have to develop a policy for how to deal with that.
Absolutely correct, and that is exactly what we have done at Hebrew Wikisource. If a book requires special editorial guidelines beyond simple proofreading, then a page is created in the Wikisource namespace, such as [[Wikisource:The Kinematics of Machinery]], where the community collaboratively develops those guidelines.
<snip>
Does anyone understand whether the years of discussion of "Wikidata" might have anything to do with #1-2?
Wikidata is really not being thought of for such a thing. Wikidata is more about data*points* than anything else. It will also give permanent IDs to wiki pages in a way that won't be broken by moves or renames. So with a (forevermore identifiable) text on Wikisource linking to an author page, which is linked to the (forevermore identifiable) biography article on Wikipedia, which is in turn linked to the LOC permanent URL and linked the same way to another authority's database, we can say the author of this work is the same as the person in that biography, who is the same as the person listed in that database and in that one too. Even if they all *name* the author with different variations, we will be able to have them all linked through the Wikipedia biography. Or it can be done with a subject, linking through "dc:subject" to say that this Wikisource text contains information about the same concept that this Wikipedia article contains information about (which is pretty much how they plan to do interwikis, from what I understand).
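To make that linking idea concrete, here is a rough sketch in Python of what correlating records through one permanent ID might look like. The identifier, names, and record structure are all made up for illustration; this is not real Wikidata data or its actual API.

    # Hypothetical illustration: one permanent, Wikidata-style ID ties
    # together records that each *name* the author differently.
    author_id = "Q0000001"  # made-up permanent identifier; survives renames

    records = {
        "wikisource_text":     {"id": author_id, "label": "M. Twain"},
        "wikipedia_biography": {"id": author_id, "label": "Mark Twain"},
        "loc_authority":       {"id": author_id, "label": "Twain, Mark, 1835-1910"},
        "other_authority_db":  {"id": author_id, "label": "Clemens, Samuel L."},
    }

    # The records are correlated through the shared ID, so we can say the
    # author of this text is the person in that biography and in those
    # databases, even though every source names the person differently.
    same_person = len({r["id"] for r in records.values()}) == 1
    print(same_person)  # True

Notice that nothing above picks a "correct" name; the only shared label is the opaque ID.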
This is done without having to decide on any names for labels. The only label is some string of numbers unique to Wikidata, and this label is not so much defined by any other label as it is correlated to the others. Wikidata will give machine-readable labels to information that is defined only by its source (X is defined only as "the number given by the 2010 US census," and X is correlated with the population of Iowa), and Wikidata will then correlate everything else that is alike (all the fields within infoboxes in all languages of Wikipedia that are meant to display numbers correlating with the population of Iowa). That is a bit simplified, because all of this is done with multiple sources and without picking one source to be the definitive one (the population of Iowa would be correlated with many datapoints, with the more recent ones weighted, in order to display a range in the infoboxes instead of the simple example of displaying X).
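Again as a rough sketch, and with placeholder figures rather than real census numbers, the datapoint idea might look like this in Python:

    # Hypothetical illustration of datapoints defined only by their source.
    # Each value is identified by *where it came from*, not by a chosen name.
    datapoints = [
        {"source": "2010 US census", "year": 2010, "value": 3_046_000},
        {"source": "later estimate", "year": 2012, "value": 3_075_000},
    ]

    # Everything "alike" (infobox fields for Iowa's population in every
    # language of Wikipedia) would be correlated to this same set of
    # datapoints. Rather than crowning one source as definitive, an infobox
    # could display a range drawn from all of them.
    values = [d["value"] for d in datapoints]
    print(f"Population of Iowa: {min(values):,} to {max(values):,}")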
I can barely grasp much of the Wikidata stuff myself, so fair warning: I might be misleading you all horribly! The neat thing about it is that, since it is not semantic, the underlying idea of how to describe the data does seem to be a useful way to *think* about alternative texts. However, I do not understand how Wikidata itself would quite fit in with what you are thinking about in 1 & 2. Wikidata sort of ignores the expression level, from what I have heard. Now, I don't really understand how it *can* ignore the expression level, but I am repeating what I heard. As I have pretty much trained myself to think about Wikisource at the expression level, this is a big roadblock for me. I just don't see how Wikidata would find a handle on the text itself in Wikisource, as opposed to how it might handle the metadata about the texts.
I hope you can make sense of some of this, and that I have not largely misunderstood it all myself.
Birgitte SB