Some folks around here may have input to provide:
http://blog.gingertech.net/2008/09/23/video-accessibility-for-firefox/
Gregory Maxwell wrote:
Some folks around here may have input to provide:
http://blog.gingertech.net/2008/09/23/video-accessibility-for-firefox/
Can I add my personal encouragement for people in our community to get involved on issues like this? The board plans to devote a fair amount of time in our next meeting to the subject of open standards, file formats, and the like. I'll be giving a bit more information about what's on our agenda shortly, but I was going to call out this topic anyway. I think it will become increasingly significant and we have some challenging questions to answer in how this relates to our mission.
--Michael Snow
2008/9/24 Gregory Maxwell gmaxwell@gmail.com:
Some folks around here may have input to provide:
http://blog.gingertech.net/2008/09/23/video-accessibility-for-firefox/
Wikiversity would probably be the group most interested. Wikipedia videos tend to be shot silent.
Silvia is a long-time collaborator with the Metavid project around metadata. I hope to continue to work with her to ensure the wiki extension fits well with the proposed accessibility features of Firefox.
MetavidWiki and mv_embed are built around supporting CMML, the timed text format Silvia authored. CMML already has a well-defined integration with Ogg, the baseline video format Firefox is supporting: http://wiki.xiph.org/index.php/CMML
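For anyone who hasn't looked at CMML before: it is a plain XML file that pairs a head section with a sequence of time-addressed clips for an Ogg stream. A minimal hand-written sketch of the idea (element and attribute names here are from memory and purely illustrative; the wiki page above has the authoritative schema):

  <?xml version="1.0" encoding="UTF-8"?>
  <cmml>
    <stream timebase="0">
      <import src="proceedings.ogv" contenttype="video/ogg"/>
    </stream>
    <head>
      <title>Example hearing</title>
    </head>
    <!-- each clip is addressed by a start time and carries text, links, descriptions -->
    <clip id="opening" start="npt:0:00:00">
      <desc>Chair opens the session and introduces the first speaker.</desc>
      <a href="http://metavid.org/">more context for this segment</a>
    </clip>
  </cmml>

Each clip's text is what a player can surface as captions or annotations, and it is what makes deep linking and search over the video possible.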
As mentioned in that blog post, she is working on a media track description language (an early version of that format, which we developed, is called ROE): http://wiki.xiph.org/index.php/ROE ROE is currently used in Metavid to describe multiple temporal text tracks, multiple media bitrates, and multiple codecs, and it could be used to describe multiple audio tracks. This XML is negotiated by mv_embed to give clients a set of media that fits their accessibility requirements; currently that is based on software platform & supported bitrate, but it could include multiple-language text/audio tracks, audio "visual description" tracks for the blind, etc.
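To make the negotiation concrete, here is a rough sketch of the kind of ROE document mv_embed fetches: one logical stream, several interchangeable media sources, plus timed text tracks. (Hand-written to illustrate the idea; the element and attribute names may not match the current draft on the Xiph wiki, so treat that page as the real reference.)

  <ROE>
    <body>
      <!-- the same video offered at two bitrates; the client picks whichever fits -->
      <track id="video" provides="video">
        <mediaSource src="stream_high.ogv" content_type="video/ogg" bitrate="800000"/>
        <mediaSource src="stream_low.ogv" content_type="video/ogg" bitrate="200000"/>
      </track>
      <!-- timed text and alternate audio, selectable by language or purpose -->
      <track id="cc_en" provides="caption" lang="en">
        <mediaSource src="transcript_en.cmml" content_type="text/cmml"/>
      </track>
      <track id="ad_en" provides="audio-description" lang="en">
        <mediaSource src="description_en.oga" content_type="audio/ogg"/>
      </track>
    </body>
  </ROE>

mv_embed (or eventually the browser itself) just picks the combination of tracks that matches the user's platform, bandwidth and accessibility preferences.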
== Real World Example ==
MetavidWiki maps these technologies onto MediaWiki. Multiple audio and video MediaWiki resources that differ by bitrate, codec, or language can be mapped to a single "Stream" resource, which is described with ROE. Stream resources are also the basis for temporal wiki text data (such as transcripts or subtitles) that can differ by language or annotative qualities.
When viewing any stream in Metavid, say the first 10 minutes of the Kucinich impeachment proceedings (http://tinyurl.com/52veab), you will notice that the little RSS-like caterpillar in the upper right links to the ROE XML representation of that stream: http://tinyurl.com/4z5mdb When you embed that clip in a blog, for example http://tinyurl.com/4r5xlg, all the metadata remains accessible, so you can get at the transcripts by clicking the little "CC" in the lower right of the player (right now you can only select the English transcript or annotative track). When you click "download" it exposes all the ROE tracks as downloadable. Notice that all the data is temporal: only those 10 minutes of metadata and media streams of the full multi-hour legal smackdown of GW are requested.
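On the temporal point: the request follows the Annodex-style temporal URI idea, i.e. the start/end offsets travel in the URL itself, roughly like

  http://metavid.org/...?t=npt:0:00:00/0:10:00

(the exact path and parameter names Metavid uses may differ; this is just the general pattern), so the server only has to serve the metadata and media for that ten-minute window.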
== Call to Action ==
This is kind of a limited example since it's English-only, but hopefully you get the idea. If someone has an ideal video media set with multiple audio tracks, and people who could work on multiple transcripts, I am open to doing more multi-language-friendly development / experimentation / demos ;)
peace, michael
Gregory Maxwell wrote:
Some folks around here may have input to provide:
http://blog.gingertech.net/2008/09/23/video-accessibility-for-firefox/