While human-read articles are great, they quickly become out of date and are available for only a fraction of our articles.
Why don't we have a "Listen" button beside our read button that, when clicked, will read the article aloud for the person in question?
There are 37 open source text-to-speech engines listed here: http://www.findbestopensource.com/tagged/text-to-speech. Some of them support up to 50 languages. This would, of course, require the support of the Wikimedia Foundation.
I guess we could also do it with a gadget initially. Thoughts?
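As a rough sketch of what such a gadget would need first, here is how it might request the plain text of an article from the MediaWiki API, assuming the wiki has the TextExtracts module (prop=extracts with explaintext) available, as Wikipedia does. The function name and parameters are illustrative, and feeding the result to an actual TTS engine is left out:

```python
from urllib.parse import urlencode

def build_extract_url(title, lang="en"):
    """Build a MediaWiki API request for the plain text of an article.

    Uses the TextExtracts module (prop=extracts) with explaintext=1 so
    the response is stripped of markup -- roughly the clean text a
    text-to-speech engine would need as input.
    """
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,
        "format": "json",
        "titles": title,
    }
    return "https://%s.wikipedia.org/w/api.php?%s" % (lang, urlencode(params))
```

A "Listen" gadget could fetch this URL and hand the extract to whichever open source TTS engine was chosen.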
(only marginally related, but this is to say that I like this idea)
A couple of years ago I contacted a professor at the University of Siena (Tuscany, Italy) who was the head of a project that built a text-to-sign-language converter. The software converted Italian text into LIS (Lingua Italiana dei Segni, Italian Sign Language) and was also tested on public television (see the website below).
The software is called Blue Sign: http://www.bluesign.it/
Basically, since the website said that the project was over, I asked them to re-release the code with a free/libre open license which is a precondition to use it on Wikipedia.
Despite some initial interest, in the end the professor told me that it was too complicated to contact every author (actually a handful of people) to obtain their permission, so this resulted in nothing :(.
Cristian
While the idea of a text-to-speech module for MediaWiki is obvious and plausible, I honestly don't see a benefit in a text-to-sign-language output.
Of course it is a nice experiment and certainly helpful for something like a TV show, where spoken language is converted from sound to text to sign language, so that deaf people can use the same media as others do.
But in our case there already is something deaf people can use as well as anybody else: text and images. And while hearing people can benefit from an audio output by using their eyes for something else in the meantime, deaf people can't, because they need their eyes for sign language as much as for reading text.
Did I miss some aspect? Is there a point in converting something visual into something visual?
// Martin
On 25/Jan/2015 12:18, "Martin Kraft" martin.kraft@gmx.de wrote:
Did I miss some aspect? Is there a point in converting something visual into something visual?
I have been told that people born deaf find it easier to read things in sign language. I imagine it is like the difference between reading something written in your mother tongue and reading something in another language you know. Of course the one above was an experiment (and at least an attempt at getting a university to release their software under a FLOSS license, let's not forget), since I expect some differences between a converted text and one natively written in sign language.
If you look in the Wikimedia Incubator and Meta there are discussions about a Wikipedia in ASL, that is, American Sign Language.
http://meta.m.wikimedia.org/wiki/Requests_for_new_languages/Wikipedia_Americ...
C
On Sun, Jan 25, 2015 at 7:32 AM, Cristian Consonni kikkocristian@gmail.com wrote:
I have been told that people born deaf find more easy to read things in sign language. I imagine it like the difference between reading something written in your mother tongue and reading something in another language you know.
Yes, I had a deaf student who opened my eyes to this -- he wanted to create a video site for the deaf that would have signed videos and movies. He had staffers and volunteers take viral YouTube videos and "sign" them for the deaf.
My first question was, wouldn't reading subtitles simply solve the problem? Why do you need to do ASL versions?
He gave me an annoyed look. It's something the deaf community finds frustrating to explain to outsiders.
There's a reason it's called American SIGN LANGUAGE and not "signed English language." It's a primary language in itself, and reading off the screen is as inferior an experience as if we read the subtitles with the sound off.
-Andrew
On 25.01.2015 at 23:22, Andrew Lih wrote:
There's a reason its called American SIGN LANGUAGE and not "signed English language." It's a primary language in itself, and reading off the screen is as inferior an experience as if we read the subtitles with the sound off.
Yes, of course: sign language is a far better substitute for spoken language than subtitles, not least because it comes together with the facial expressions and gestures of a real person "signing" and therefore has a kind of accentuation that written text cannot provide.
But in the case of Wikipedia articles, the thing to be translated is not spoken language but well-phrased text, and furthermore a text with a lot of technical terms. And afaik these terms are hard to encode and decode in sign language. And while a hearing person can benefit from looking at the pictures, maps, etc. while listening to somebody reading the article, deaf people need their eyes to "listen".
It would be interesting to read what somebody who really is deaf thinks about this topic.
// Martin
For those who are interested, this is the American Sign Language Wikipedia on Incubator:
https://incubator.wikimedia.org/wiki/Wp/ase
Sign languages are indeed real languages, and for example American Sign Language is unrelated to English or even British Sign Language (in fact, it's closest to French Sign Language).
It is certainly true that sign languages have not historically been written in any form by most of their users, and so video should be an important part of any such project, although video is of course not really as wiki-amenable as text is.
The most complete writing system for sign languages, however, a sort of International Phonetic Alphabet for signing, is SignWriting, and its community has been active on Wikimedia projects. There are some technical difficulties with implementing SignWriting on-wiki, including the vast number of potential gestures to be represented, the fact that it is not yet in Unicode, and also that it is written vertically rather than horizontally, but progress has been made on this with a MediaWiki software extension by the SignWriting community.
Thanks, Pharos
_______________________________________________
Wikimedia-l mailing list, guidelines at: https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe
Hi James,
Thanks for this suggestion. May I suggest that you post this idea in IdeaLab? https://meta.wikimedia.org/wiki/Grants:IdeaLab
Siko, cc'd here, might be able to help advise about possible development of this proposal.
Thanks,
Pine
Yes, if there is no opposition to the idea, I will post it to the IdeaLab. Thanks Pine :-)
J
Okay, I have gone ahead and started a proposal here: https://meta.wikimedia.org/wiki/Grants:IdeaLab/A_%22Listen%22_Button
J
-- James Heilman MD, CCFP-EM, Wikipedian
The Wikipedia Open Textbook of Medicine www.opentextbookofmedicine.com
James Heilman wrote:
While human read articles are great they quickly become out of date and are available for only a fraction of our articles.
Yep.
Why don't we have a "Listen" button beside our read button that when clicked will read the article for the person in question?
I think this is an area where it might be difficult to know what's best to do. A few unordered thoughts:
* We need to make sure that it's easy to distinguish user interface text and other text we want to ignore (noise) from page content text (signal).
* People who really need text-to-speech tools have likely already installed them.
* Text-to-speech may be something that's better handled at the browser or operating system level, rather than at the Web site level.
* Even if text-to-speech isn't built into Wikimedia wikis, per se, we can always provide help/resource pages and guides for our users. For example, explaining how to install free text-to-speech software on common operating systems.
* A middle-ground option might be to explore what we can do to make it easier to programmatically distinguish signal from noise when reading a page. This would include (better) educating developers about accessibility concerns and educating wiki page authors about good and bad practices (do specify alt text, don't use images for text unless necessary, etc.). Plus there's the intersection of these two groups, such as developers implementing simpler user interfaces that allow wiki page authors to more easily add alt text to media. Or developers adding the ability to specify default alt text on a per-file basis, rather than requiring that alt text always be specified when the image is included on a page.
* Another middle-ground option might be trying to find some integration between text-to-speech-capable Web content and browsers. Perhaps similar to https://en.wikipedia.org/wiki/Universal_Edit_Button.
There's also what I would consider a subset of text-to-speech support (word pronunciations) that is tracked at https://phabricator.wikimedia.org/T48610.
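The signal-from-noise point above can be made concrete with a small sketch. This is purely illustrative: which elements count as noise on a real wiki page is exactly the judgement call being discussed, and the class and function names here are invented for the example:

```python
from html.parser import HTMLParser

class SignalExtractor(HTMLParser):
    # Elements treated as noise in this sketch (navigation chrome,
    # scripts, styles); a fuller version would also honour attributes
    # such as role="navigation".
    SKIP = {"script", "style", "nav", "header", "footer"}

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting level inside skipped elements
        self.chunks = []    # collected content text

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def extract_signal(html):
    """Return only the content ("signal") text of an HTML page."""
    parser = SignalExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

A screen reader or TTS tool effectively has to make this kind of decision for every page; the better the markup distinguishes the two, the less guessing the tool has to do.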
MZMcBride
We were discussing this with an association of blind people in Poland, and they told us that for them the most important thing is a clear and logical structure for the website: plain main text, menu/navigation in plain text, and descriptions of media in plain text. They use their own free text-to-speech software, which they are used to. Such software simply reads everything on the screen in the same neutral way. So they don't need any other tools for voice reading; if other websites provide them, they usually do not use them. Maybe in some other languages the situation is different, but it would be better to discuss it with the relevant associations before investing time and money in such solutions. Fortunately, Wikipedia actually is quite text-to-speech friendly at the moment.
On Sun, Jan 25, 2015 at 4:00 AM, Tomasz Ganicz polimerek@gmail.com wrote:
We were discussing this with an association of blind people in Poland, and they told us that for them the most important thing is a clear and logical structure for the website [...] Fortunately, Wikipedia actually is quite text-to-speech friendly at the moment.
Anecdotal to Tomasz's point: there was an editor on IRC the other day in -commons who is Deaf/Blind and considers Wikimedia sites, in their current state, to be among the friendliest of any website to disability-adaptive software. Mucking that up would be... bad.
What I suppose I'm challenging, James, is this: are our websites playing well with accessibility? What are the specific points of failure? It varies by disability, so there's no single patch to make it all right. What is the path to make it right? How can I help? Where can we document this?
Max Klein and I had a chat with someone from a similar group a couple of years ago, and he reported much the same thing - the actual site structure is pretty good for screenreaders and similar software, or was in early 2013.
(His main suggestion was to look into improved audio "materials" - recordings of what things sounds like, soundscapes, etc. - which we don't really do much with. Andy Mabbett picked up part of this with the Voice Intro Project, which is great, but the rest is still fertile ground...)
Anecdotally, I believe the "spoken Wikipedia" article recordings are mainly used for podcast-type listening rather than for accessibility purposes. However, if anyone has some firm numbers on this (or even an indication of how much they're used at all) I'd love to know about it!
Andrew.
I have used screen readers myself, and often sit on public transport listening to reports and articles I would never otherwise find the time to read through. Audio screen reader apps are increasingly useful for mobile and tablet access, since the small screen offers the equivalent of 'license plates for bumble bees': hard work for someone who has difficulty reading, especially someone too vain to use their reading glasses on the bus.
Properly up-to-date "how to" guides for the better screen readers currently available, along with projects to improve how our articles and image pages are tagged so as to improve screen reader navigation, would probably be more practical ways to benefit a wide community of readers than a standard button.
Fae
I agree with this point, and I think that this problem is central.
People with impaired vision already use text-to-speech solutions as client applications.
So the problem is a different one: are the pages of Wikimedia projects adapted to be read by a TTS client program?
I think that the excessive use of templates is the main obstacle to making Wikimedia projects compatible with the WAI standard (http://www.w3.org/WAI/).
So the order of work is different: first adapt the pages to WAI, then check them with a TTS client program, and then decide whether it makes sense to introduce TTS integrated into the web interface.
Someone with low vision, or who is blind, probably cannot find or read the instructions for using TTS with Wikipedia; they have their own program and are able to use it.
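The checking step suggested above could start with something as simple as an automated scan for images lacking alt text. A minimal sketch, purely illustrative (real WAI conformance checking covers far more than alt attributes):

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Count <img> elements that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.total = 0
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        # HTMLParser also routes self-closing <img .../> tags here.
        if tag == "img":
            self.total += 1
            if not dict(attrs).get("alt"):
                self.missing += 1

def audit_alt_text(html):
    auditor = AltTextAudit()
    auditor.feed(html)
    return auditor.missing, auditor.total
```

Run over rendered article pages, a check like this would give a rough measure of how much of the "signal" is currently invisible to a TTS client.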
regards
On 24.01.2015 23:21, James Heilman wrote:
Why don't we have a "Listen" button beside our read button that when clicked will read the article for the person in question? [...]
FYI, yesterday we published Kiwix for Android 1.92, which offers TTS for the first time. Here is the release note: https://lists.wikimedia.org/pipermail/offline-l/2015-January/001296.html
We would be happy to offer this feature in the desktop version too, but AFAIK none of the open source text-to-speech solutions is good enough for that. This might be a reason not to have one on the online web site either.
Emmanuel
On 24 January 2015 at 22:21, James Heilman jmh649@gmail.com wrote:
Why don't we have a "Listen" button beside our read button that when clicked will read the article for the person in question?
Such functionality belongs in the browser, not the web page. So long as we use valid and accessible markup, the user can use a tool of their preference to have the page read to them, in a voice that suits them.
There are 37 open source text-to-speech listed here http://www.findbestopensource.com/tagged/text-to-speech
Having in the past examined several such tools in my capacity as a professional web manager, I found none that was better suited to the purpose than the scenario I describe above. Indeed, several seemed to be money-making scams.