Daniel, I agree, but isn't that what Multilingual Text requires? A language code?
I.e. how does the current model plan to solve that?
I assume most of it is hidden behind mini-wizards like "Create a new lexeme", which make sure the multilingual-text language and the language property are set consistently. In that case I can see this working.
On Mon, Apr 10, 2017 at 10:11 AM Daniel Kinzler daniel.kinzler@wikimedia.de wrote:
On 10.04.2017 at 18:56, Gerard Meijssen wrote:
Hoi, the standard for the identification of a language should suffice.
I know no standard that would be sufficient for our use case.
For instance, we not only need identifiers for German, Swiss, and Austrian German. We also need identifiers for German German before and after the spelling reform of 1901, and before and after the spelling reform of 1996. We will also need identifiers for the "language" of mathematical notation. And for various variants of ancient languages: not just Sumerian, but Sumerian from different regions and periods.
The only system I know that gives us that flexibility is Wikidata. For interoperability, we should provide a standard language code (aka subtag). But a language code alone is not going to be sufficient to distinguish the different variants we will need.
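To make Daniel's point concrete, here is a minimal Python sketch (my own illustration, not Wikidata's actual model or any Wikibase API). BCP 47 language tags do cover a few of his examples via registered variant subtags: "1901" and "1996" are real IANA-registered variants marking German orthography before and after the respective reforms. But Sumerian only has the bare code "sux", with no registered subtags for regions or periods, and there is no tag at all for mathematical notation. The naive parser below is a simplification; real BCP 47 parsing also handles script subtags, extensions, and grandfathered tags.

```python
def parse_tag(tag: str) -> dict:
    """Naively split a BCP 47-style tag into language, region,
    and variant subtags. A simplified sketch for illustration only."""
    parts = tag.split("-")
    result = {"language": parts[0], "region": None, "variants": []}
    for part in parts[1:]:
        # Two-letter alphabetic subtags are treated as regions here;
        # everything else is treated as a variant.
        if len(part) == 2 and part.isalpha():
            result["region"] = part.upper()
        else:
            result["variants"].append(part)
    return result

# German as written in Switzerland, in pre-reform (1901) orthography:
print(parse_tag("de-CH-1901"))
# → {'language': 'de', 'region': 'CH', 'variants': ['1901']}

# Sumerian has a language code, but no subtags exist to distinguish
# regions or periods -- exactly the gap Daniel describes:
print(parse_tag("sux"))
# → {'language': 'sux', 'region': None, 'variants': []}
```

So a standard code works as the interoperability layer, but the finer distinctions would have to live in Wikidata items layered on top of it.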
--
Daniel Kinzler
Principal Platform Engineer
Wikimedia Deutschland Gesellschaft zur Förderung Freien Wissens e.V.

Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata