Thanks for your interest in metavid, I really like what you have done with the openmeetings.org site. One suggestion: use a somewhat finer granularity in your transcripts. Big text blocks make it difficult to search the transcripts and quickly jump to what you're looking for.
Comments inline; I have cc'ed the metavid-l list.
I will try to post this "roadmap" on the wiki sometime soon, because I realize I have been so busy programming that I have not kept people updated with ~the plan~.
George Chriss wrote:
Hi sj,
During the OVC conference you mentioned that you might know devs interested in hacking on MetaVidWiki, which isn't being actively developed at the moment pending [[MW:Media Projects Overview|the big push]]. I've been able to hammer OpenMeetings.org into a workable condition, but now I'm split on finishing a video backlog vs. blogging and site maintenance vs. finding and training volunteers to annotate videos as they're uploaded. To add to this, I'm under increasing demand as a cameraperson. (Yikes!)
We can do a post on metavid.org about the site (we don't get too many visitors, and not too many transcript contributors either ;) and maybe a post on the wikitech blog too.
There is a lot of work taking place on metavid pieces like the JavaScript player and the JavaScript sequencer. This is not necessarily reflected in the extension code, as the JavaScript components have been moved to the trunk. The idea is to make every piece as reusable in a stand-alone context as possible: a stand-alone sequencer / video editor that exports flat files, a stand-alone skinnable player that can embed Ogg video anywhere with a simple script include, a stand-alone Firefogg encoder to export videos, and so on.
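To make the embed idea concrete, the script-include usage could look something like this (the script URL and exact rewrite behavior here are illustrative of the approach, not the final API):

  <!-- The included script scans the page for <video> tags and swaps in the
       skinned player, with a Java Cortado fallback for browsers that lack
       native Theora support. -->
  <script type="text/javascript" src="http://metavid.org/mwEmbed/mv_embed.js"></script>
  <video src="http://example.org/media/some_meeting.ogg"
         poster="http://example.org/media/some_meeting.jpg"></video>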
My next major task is to split the extension up into separate "stand-alone" extensions: "Sequencer", "Timed Text", and "Temporal Media Search". I will create a "metavidCore" that holds the shared functions, then re-factor all the existing metavid code into new, cleaner versions as stand-alone sub-extensions. That way a simple checkout of metavidWiki running trunk gives you all that functionality.
Other "sub-extensions" will be: OggHanlder (handles user uploaded ogg video uploads) WikiAtHome extension (helps distribute transcode and flattening operations, we also want to integrate a bittrorent client with video tag support) "Semantic Media Wiki" (handles semantic properties on videos explained in more detail below)
There are several areas where development work would be helpful:
==User-selectable mirrors== I prefer to keep the video streams at reasonably high quality, using Ogg Theora. The trade-off is of course bandwidth; I'm not really sure how quickly video loads on non-mid-Atlantic connections. Ideally, JavaScript would be able to guess the closest video mirror (e.g., the Internet Archive for visitors from CA). For bandwidth-restricted visitors, an audio-only option would be helpful.
Using the geolocation feature of Firefox to choose a nearby mirror is a good idea, but this is better handled at the CDN level ... I think ultimately our thin development effort here would be best spent on integrating a media-streaming BitTorrent client into Firefogg. We could add "read-only" BitTorrent for Java Cortado users and include HTTP seeding support (which works with byte-offset range requests, so traditional servers can easily be added to the seeding pool). That way your client always tries to download from the fastest peer, be it a normal HTTP server nearby or someone with a fat pipe in Alaska.
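As a rough client-side sketch of that idea: time a small byte-range request against each candidate source and prefer the fastest responder. The mirror URLs below are made up, and a real version would have to work around same-origin restrictions:

  // Probe each mirror with a small byte-offset request -- the same request
  // style plain HTTP seeding relies on -- and report the fastest responder.
  var mirrors = [
    'http://metavid.org/media/clip.ogg',
    'http://www.archive.org/download/clip/clip.ogg'
  ];

  function pickFastestMirror( callback ) {
    var best = null, bestTime = Infinity, pending = mirrors.length;
    for ( var i = 0; i < mirrors.length; i++ ) {
      probe( mirrors[ i ] );
    }
    function probe( url ) {
      var xhr = new XMLHttpRequest();
      var start = new Date().getTime();
      xhr.open( 'GET', url, true );
      xhr.setRequestHeader( 'Range', 'bytes=0-16383' ); // first 16k only
      xhr.onreadystatechange = function () {
        if ( xhr.readyState !== 4 ) return;
        var elapsed = new Date().getTime() - start;
        if ( ( xhr.status === 206 || xhr.status === 200 ) && elapsed < bestTime ) {
          bestTime = elapsed;
          best = url;
        }
        if ( --pending === 0 ) callback( best );
      };
      xhr.send( null );
    }
  }

  // Usage: point the player at whichever source answered quickest.
  pickFastestMirror( function ( url ) {
    document.getElementById( 'player' ).setAttribute( 'src', url );
  } );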
Also, I've noticed a "local seek" feature but haven't had time to look at it in detail.
This basically does a local seek instead of an oggz_chop server request whenever the media has already been downloaded up to the requested point.
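In sketch form the decision is roughly this (the real player code in trunk differs, and the ?t= parameter format shown is an assumption):

  // Seek within already-buffered data when we can; otherwise ask the server
  // (via oggz_chop) for a stream that starts at the requested offset.
  function getBufferedEnd( video ) {
    if ( video.buffered && video.buffered.length ) {
      return video.buffered.end( video.buffered.length - 1 );
    }
    return 0; // treat missing buffered info as "nothing loaded"
  }

  function seekTo( video, seconds ) {
    if ( seconds <= getBufferedEnd( video ) ) {
      video.currentTime = seconds; // local seek: no server round trip
    } else {
      video.src = video.src.split( '?' )[ 0 ] + '?t=' + seconds;
      video.load();
      video.play();
    }
  }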
==Identi.ca== A good way to increase project visibility would be to integrate Identi.ca dents (also CC-BY). A basic start would be to load dents in a new time-aligned text layer according to a common hashtag, such that visitors could see who dented what during a meeting. I have a PHP script that saves dents for this purpose.
A more advanced approach would be to also enable visitors to select who to 'subscribe' to while watching a meeting, such that you could follow what your friends dented during a meeting.
Finally, a really fancy approach would be to build an API such that a visitor could dent _from_ OpenMeetings.org while watching a meeting, for later review by others. The dent could be labeled "..from OpenMeetings.org", which would then link to the specific point in the meeting from which the dent originated.
There is some excitement on this topic at Penn State: http://www.colecamplese.com/2009/07/twitter-annotations/
That sounds really cool. The idea of integrating Jabber transcripts was also discussed at one point; Aphid may be able to tell you more about that ;) ... I think what we need is a solid API to enable easy building of those types of tools. Unfortunately, I built metavid version 1 a while ago and did not structure it as I would today. Subsequent versions should make it very easy to add layers of transcript data through simple interfaces. Things like reCAPTCHA-style transcription & translation are also key for quick and easy participation.
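For the basic hashtag layer, a client-side sketch could look like the following. The identi.ca search endpoint and JSON field names are assumptions based on its Twitter-compatible API, addCaption() is a hypothetical hook into a timed-text layer, and jQuery is assumed to be available:

  // Pull dents for a hashtag and drop each one into a time-aligned layer,
  // offset from the start of the meeting.
  function loadDentLayer( hashtag, meetingStartMs, addCaption ) {
    var url = 'http://identi.ca/api/search.json?q=%23'
            + encodeURIComponent( hashtag ) + '&callback=?'; // JSONP
    jQuery.getJSON( url, function ( data ) {
      jQuery.each( data.results, function ( i, dent ) {
        var offset = ( new Date( dent.created_at ).getTime() - meetingStartMs ) / 1000;
        if ( offset >= 0 ) {
          // addCaption( startOffsetInSeconds, author, text )
          addCaption( offset, dent.from_user, dent.text );
        }
      } );
    } );
  }

The reverse direction you describe, denting _from_ the site, would mostly mean posting a dent whose text includes a temporal URL for the page (the stream page plus a time offset), so the dent links back to the exact point in the meeting.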
==Geo-searching== I've started to include GPS coordinates with video uploads, such that visitors could search for meetings by both location and time. Search-by-location is not yet implemented.
I believe there is some geo-location property work done in Semantic MediaWiki. If we can integrate with that effort, that would be ideal. The Pad.ma site also did some neat things with video tagging in that regard.
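If nothing reusable pans out there, the search-by-location piece is mostly distance math. A hypothetical filter over videos annotated with GPS coordinates (the video object shape here is invented for illustration):

  // Keep the videos whose recorded coordinates fall within a radius of the
  // search point, using the haversine great-circle distance.
  function haversineKm( lat1, lon1, lat2, lon2 ) {
    var R = 6371; // Earth radius in km
    var toRad = function ( d ) { return d * Math.PI / 180; };
    var dLat = toRad( lat2 - lat1 ), dLon = toRad( lon2 - lon1 );
    var a = Math.sin( dLat / 2 ) * Math.sin( dLat / 2 )
          + Math.cos( toRad( lat1 ) ) * Math.cos( toRad( lat2 ) )
          * Math.sin( dLon / 2 ) * Math.sin( dLon / 2 );
    return 2 * R * Math.atan2( Math.sqrt( a ), Math.sqrt( 1 - a ) );
  }

  function filterByLocation( videos, lat, lon, radiusKm ) {
    var hits = [];
    for ( var i = 0; i < videos.length; i++ ) {
      var v = videos[ i ]; // assumed shape: { title: ..., lat: ..., lon: ... }
      if ( haversineKm( lat, lon, v.lat, v.lon ) <= radiusKm ) {
        hits.push( v );
      }
    }
    return hits;
  }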
==Semantic cleanup== I don't really know what I'm doing with semantic metadata (e.g., labeling speakers). Help?
Even if you just label speakers, it's useful. For example, maybe you have some video talking /about John Doe/ and some videos /of John Doe talking/; semantic properties let you keep those distinct. You also avoid categorization verbosity and the inconsistency that comes with it: if one person tags a video "spoken by John Doe" and another tags it with just "John Doe", it's hard to know what is what.
Other things you could semantically tag are things like geo-location. (We already put the temporal tag in the wiki temporal-annotation title.) Only special properties that carry semantic meaning, like geo-location, are good candidates for semantic tagging within the context of timed-media attributes.
Everything else should fall under a category or "tag", not be a semantic property.
The real value is in combining semantic properties about speakers and, in your case, potentially geo-location: "show me meetings where John Doe is in Texas speaking about Free Speech" (where "Free Speech" is just a category, while "John Doe" and "in Texas" are semantic properties).
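In Semantic MediaWiki terms, that combined query could be an inline #ask along these lines (the property names "Spoken by" and "Located in" are illustrative; use whatever properties the wiki actually defines):

  {{#ask: [[Category:Free Speech]] [[Spoken by::John Doe]] [[Located in::Texas]]
   | ?Spoken by
   | ?Located in
  }}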
==OLPC XO== I'm curious to know if playback works on XOs, and if not what it would take to make this happen.
We would probably need to innovate on the interface a little to make it work with the smaller screen. We already support hardware-accelerated rendering via plugins (something the XO needs, as it can't easily decode Firefox <video> in software), and it would be nice to integrate with other features of the device, like accepting uploads captured on it.
Hope this was helpful; keep us updated on your efforts,
peace, --michael