One reason to identify a language is to exclude it from being considered
part of another language. It follows that a single string can, and
should, be identifiable as not being in the article's base language.
There are practical reasons for wanting this, such as serving webfonts
for languages like Batak, Burmese, etc.
Consequently, a Wikidata approach makes no sense at all.
On 24 April 2013 19:44, Marc A. Pelletier <marc(a)uberbox.org> wrote:
On 04/23/2013 11:29 PM, Erik Moeller wrote:
(Keeping in mind that some
pages would be multilingual and would need to be identified as such.)
If so, this seems like a major architectural undertaking that should
only be taken on as a partnership between domain experts (site and
platform architecture, language engineering, Visual Editor/Parsoid,
My two currency subunits:
A wikidata-like approach seems like the only sensible approach to the
problem IMO; that is, the concept of a 'page (read: data item)' should
be language neutral and branch off into a set of "real" pages with
their own title and language information.
A "metapage" X would have an enumeration of representations in different
languages, each with its own localized title(s) and contents. This
way, given any such page, the actual information needed to switch
between languages and handle language-specific presentation is
immediately available. Categories would need no magical handling: the
fact that category Y is named "Images of dogs" in English and "Imágenes
de perros" in Spanish is just part of the normal structure.
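The metapage structure described above could be sketched as, say, a minimal Python data model. All class and field names here are hypothetical illustrations of the idea, not an actual MediaWiki or Wikidata API:

```python
# Hypothetical sketch of the "metapage" idea: a language-neutral item
# that branches off into per-language pages, each carrying its own
# localized title and content.

from dataclasses import dataclass, field


@dataclass
class LanguagePage:
    """One concrete, language-specific representation of a metapage."""
    lang: str      # language code, e.g. "en" or "es"
    title: str     # localized title
    content: str   # localized page content


@dataclass
class MetaPage:
    """Language-neutral item enumerating its representations."""
    item_id: str
    pages: dict = field(default_factory=dict)

    def add(self, page: LanguagePage) -> None:
        self.pages[page.lang] = page

    def titles(self) -> dict:
        # Switching languages needs no magic: the localized titles
        # are immediately available on the item itself.
        return {lang: p.title for lang, p in self.pages.items()}


# Category Y carries both its English and Spanish names directly.
category_y = MetaPage("Y")
category_y.add(LanguagePage("en", "Images of dogs", "..."))
category_y.add(LanguagePage("es", "Imágenes de perros", "..."))
```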
Add to this a simple user preference for language ordering, used when
"their" language is unavailable, and you have a good framework.
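The fallback preference could work roughly like this; a minimal sketch, assuming the page exposes a mapping of available languages (the function name and signature are illustrative):

```python
# Sketch of the language-fallback preference described above: given a
# user's ordered list of preferred languages, pick the first one that
# the page actually has a representation in.

def resolve_language(available, preferences):
    """Return the first preferred language present in `available`,
    or fall back to an arbitrary-but-deterministic available one."""
    for lang in preferences:
        if lang in available:
            return lang
    # No preferred language matched; fall back deterministically.
    return min(available) if available else None
```

For example, a user preferring French, then Spanish, then English would get the Spanish representation of a page that exists only in English and Spanish.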
All that'd be left is... UI. :-)
Wikitech-l mailing list