Mark Williamson wrote:
I meant that it's unsuitable for a dictionary.
Any good dictionary will tell you how a Chinese character is pronounced in any given variety.
Chinese characters wouldn't be so difficult to learn for somebody who only knew a signed language (in much of the developing world, deaf people don't learn to read and write spoken languages), because they can be associated with signs on a morphemic basis.
However, using a spoken language written alphabetically for glosses requires that any given deaf person learn that spoken language before being able to read a text.
This also means that a deaf man in Boston who wants to know the ASL equivalent of the English word "boarish" will not get a useful answer by looking in a dictionary and finding only the gloss "boarish" in the field for the ASL translation: the gloss tells him nothing about how the sign is actually made.
Mark
Hoi,
First of all, there are three different types of languages: written languages, spoken languages and signed languages. They are quite different. A deaf person does not learn a spoken language; he learns a written language. The difference is quite crucial. How a written language relates to a spoken language is something that a deaf person does not appreciate; this relation is often tenuous at best. It makes our written languages as abstract to deaf people as Chinese characters.
Your assumption that deaf people have to learn written languages is correct. In many ways it is an essential skill. It is awkward that a dictionary needs another language to be accessible to them, but to a large extent that is just that: awkward. It is not a reason to leave sign languages out of Ultimate Wiktionary, as UW intends to have all words in all languages. When you have a look at a recent version of the data design, you will find that I have included Wolfgang Georgdorf's methodology of providing metadata about signs as well.
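For concreteness, here is a minimal sketch of what such a sign entry might record. It is purely illustrative: the field names are not taken from the actual UW data design, they only show the kind of metadata a sign needs so the dictionary can say what the sign looks like rather than just give a gloss.

    # Illustrative only: a hypothetical record for sign metadata,
    # not the actual Ultimate Wiktionary data design.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SignEntry:
        language: str    # e.g. "ASL", "BSL"
        gloss: str       # spoken-language gloss used as a label, e.g. "BOARISH"
        handshape: str   # handshape parameter of the sign
        location: str    # where on the body the sign is articulated
        movement: str    # movement of the hands
        video_url: Optional[str] = None   # link to a recording of the sign

Handshape, location and movement are the usual phonological parameters used to describe signs, which is why metadata along these lines, rather than a gloss alone, is what makes a sign entry useful.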
So when someone wants to find the ASL for "boarish" and is literate, he will be able to find it. Being illiterate makes using a computer practically impossible. A user interface in ASL remains a dream; it will not be feasible in UW mark I. The best it might work is that the UI shows the words and, when you click one, you get a signed instruction.
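As a rough sketch of that interaction (again illustrative, with made-up data and names rather than anything from UW), the UI would resolve the clicked word to whatever signed material is attached to the entry:

    # Illustrative only: resolving a clicked word to a signed instruction.
    # The mapping, URL and function name are hypothetical, not part of UW.
    from typing import Optional

    SIGN_VIDEOS = {
        ("ASL", "boarish"): "https://example.org/asl/boarish.webm",
    }

    def signed_instruction(language: str, word: str) -> Optional[str]:
        """Return the URL of a sign video for the given word, if one exists."""
        return SIGN_VIDEOS.get((language, word.lower()))

    # A literate user clicks "boarish" in the English UI and gets the ASL video.
    print(signed_instruction("ASL", "boarish"))

Even with something like this, the user still has to be able to read the written word before clicking it, which is exactly the limitation discussed above.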
This discussion is not really relevant to the WIKITECH mailing list, so I am cross-posting it to the WIKTIONARY mailing list, where it is of interest.
Thanks, GerardM
On 27/08/05, Delirium <delirium@hackish.org> wrote:
Mark Williamson wrote:
- With word-for-word glosses in a spoken language. For ASL or BSL this is usually English; for InSL it may be Hindi or another Indian language or English; for Chinese SL it will probably be Chinese. While this is suitable in most cases for writing whole sentences and recording syntax and grammar, it gives no specific information about what a sign looks like and thus is completely unsuitable.
Why does this make it completely unsuitable? A large proportion of Chinese characters give no specific information about how they are pronounced (and indeed are pronounced radically differently by Mandarin, Cantonese, and Japanese speakers), but that doesn't seem to have led to them being deemed unsuitable for use in a written language. At the very least, using Chinese characters to write Chinese SL is no worse than using Kanji to write Japanese.
-Mark