(only marginally related, but this is to say that I like this idea)
A couple of years ago I contacted a professor at the University of Siena (Tuscany, Italy) who headed a project that built a text-to-sign-language converter. The software converted Italian text to LIS (Lingua Italiana dei Segni, Italian Sign Language) and was also tested on public television (see the website below).
The software is called Blue Sign: http://www.bluesign.it/
Basically, since the website said that the project was over, I asked them to re-release the code under a free/libre open license, which is a precondition for using it on Wikipedia.
While the idea of a text-to-speech module for MediaWiki is obvious and plausible, I honestly don't see a benefit in a text-to-sign-language output.
Of course it is a nice experiment and certainly helpful for something like a TV show, where spoken language is converted from sound to text to sign language, so that deaf people can use the same media as everyone else.
But in our case there already is something deaf people can use as well as anybody else: text and images. And while hearing people can benefit from an audio output by using their eyes for something else in the meantime, deaf people can't, because they need their eyes for sign language as much as for reading text.
Did I miss some aspect? Is there a point in converting something visual into something visual?
// Martin