On 25.01.2015 at 23:22, Andrew Lih wrote:
On Sun, Jan 25, 2015 at 7:32 AM, Cristian Consonni
<kikkocristian(a)gmail.com>
wrote:
On 25/Jan/2015 12:18, "Martin Kraft"
<martin.kraft(a)gmx.de> wrote:
Did I miss some aspect? Is there a point in
converting something visual
into something visual?
I have been told that people born deaf find it easier to read things in
sign language. I imagine it is like the difference between reading something
written in your mother tongue and reading something in another language you
know.
Yes, I had a deaf student who opened my eyes to this -- he wanted to create
a video site for the deaf that would host signed videos and movies. He had
staffers and volunteers take viral YouTube videos and "sign" them for the
deaf.
My first question was, wouldn't reading subtitles simply solve the problem?
Why do you need to do ASL versions?
He gave me an annoyed look. It's something the deaf community finds
frustrating to explain to outsiders.
There's a reason it's called American SIGN LANGUAGE and not "signed English
language." It's a primary language in itself, and reading text off the screen is
as inferior an experience as if we read the subtitles with the sound off.
Yes, of course: sign language is a far better substitute for spoken
language than subtitles – not least because it comes together
with the facial expressions and gestures of a real person "signing" and
therefore carries a kind of accentuation that written text cannot provide.
But in the case of Wikipedia articles, the thing to be translated is not
spoken language but well-phrased text – furthermore, a text with a lot of
technical terms. And afaik these terms are hard to encode and decode in
sign language. And while a hearing person can benefit from looking at the
pictures, maps, etc. while listening to somebody reading the article aloud,
deaf people need their eyes to "listen".
It would be interesting to read what somebody who really is deaf thinks
about this topic!
// Martin