On Fri, August 7, 2009 17:03, Michael Dale wrote:
Thanks for your interest in metavid; I really like what you have done with the openmeetings.org site.
Thanks! Please send critical feedback as I am striving for rapid growth.
I would just mention that you really should use somewhat smaller granularity in your transcripts. Big text blocks make it difficult to search the transcripts and quickly jump to what you're looking for.
I have been using logical breakpoints to delineate transcripts ("paragraph chunks") as opposed to sentence-level chunks on the basis that it makes reading-while-watching easier. This does make search more difficult, so if there is a way to use a hybrid approach (e.g., short transcripts grouped into paragraphs when displayed), I'd be all for that.
A temporary hack might be to have Special:MediaSearch check keyword placement relative to the start of the transcript and make the jump accordingly.
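Something like the following is what I have in mind; it's just a sketch, and all the names are hypothetical (the real work would happen inside Special:MediaSearch):

  // Sketch: estimate a seek point inside a long "paragraph chunk" by
  // interpolating the keyword's character offset over the chunk's duration.
  // All names here are hypothetical.
  function estimateSeekTime( chunk, keyword ) {
      var offset = chunk.text.toLowerCase().indexOf( keyword.toLowerCase() );
      if ( offset === -1 ) {
          return chunk.startTime; // keyword not found; jump to the chunk start
      }
      var fraction = offset / chunk.text.length;
      return chunk.startTime + fraction * ( chunk.endTime - chunk.startTime );
  }

  // Example: a keyword deep in a 90-second chunk that starts at 300s
  var chunk = { startTime: 300, endTime: 390, text: '... budget vote ...' };
  var seekTime = estimateSeekTime( chunk, 'budget' );

It assumes speech is roughly evenly paced across the chunk, which is crude but probably better than always jumping to the chunk start.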
Comments inline; I have cc'ed the metavid-l list.
Hi list!
I will try and post this "roadmap" on the wiki sometime soon, cuz I realize I have been so busy programming that I have not kept people updated with ~the plan~.
George Chriss wrote:
Hi sj,
During the OVC conference you mentioned that you might know devs interested in hacking on MetaVidWiki, which isn't being actively developed at the moment pending [[MW:Media Projects Overview|the big push]]. I've been able to hammer OpenMeetings.org into a workable condition, but now I'm split on finishing a video backlog vs. blogging and site maintenance vs. finding and training volunteers to annotate videos as they're uploaded. To add to this, I'm under increasing demand as a cameraperson. (Yikes!)
We can do a post on metavid.org about the site (we don't get too many visitors, and not too many transcript contributors either ;) and maybe a post on the wikitech blog too.
We're slated for a posting on the OVA blog (Ben Moskowitz CC'd), and I have one or two other blogs in mind too.
There is a lot of work taking place on metavid pieces like the JavaScript player and the JavaScript sequencer. This is not necessarily reflected in the extension code, as the JavaScript components have been moved to the trunk. The idea is to make every piece as reusable in a stand-alone context as possible: a stand-alone sequencer / video editor that exports flat files, a stand-alone skinnable player that can embed Ogg video anywhere with a simple script include, a stand-alone Firefogg encoder to export videos, etc.
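To give a rough idea of the stand-alone player piece, the embed helper would need to do something like the following, falling back to the Cortado Java applet where the browser has no native Theora support (the function name and URLs are placeholders, not the final API):

  // Sketch: embed an Ogg video natively where <video> supports Theora,
  // otherwise fall back to the Cortado applet. Names and URLs are placeholders.
  function embedOgg( containerId, oggUrl, width, height ) {
      var container = document.getElementById( containerId );
      var v = document.createElement( 'video' );
      if ( v.canPlayType && v.canPlayType( 'video/ogg; codecs="theora,vorbis"' ) ) {
          v.src = oggUrl;
          v.controls = true;
          v.width = width;
          v.height = height;
          container.appendChild( v );
      } else {
          // applet jar location is a placeholder
          container.innerHTML =
              '<applet code="com.fluendo.player.Cortado.class" archive="cortado.jar"' +
              ' width="' + width + '" height="' + height + '">' +
              '<param name="url" value="' + oggUrl + '"/></applet>';
      }
  }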
Sounds good. I'm excited about what would be possible with the sequencer as Theora editing is an uphill process. Also, Monty mentioned "editing Ogg" (with one packet(?) per frame, vs. "standard"/"export" Ogg) might be implemented to make Ogg super-seekable in editing software.
My next major task is to split the extension up into separate "stand alone" extensions: "Sequencer", "Timed Text", and "Temporal Media Search". I will create a "metavidCore" that will hold the shared functions, and then re-factor all the existing metavid code into new, cleaner versions as stand-alone sub-extensions. This way a simple checkout of metavidWiki running trunk gives you all that functionality.
Other "sub-extensions" will be: OggHanlder (handles user uploaded ogg video uploads) WikiAtHome extension (helps distribute transcode and flattening operations, we also want to integrate a bittrorent client with video tag support) "Semantic Media Wiki" (handles semantic properties on videos explained in more detail below)
There are several areas where development work would be helpful:
==User-selectable mirrors==
I prefer to keep the video streams at reasonably high quality, using Ogg Theora. The trade-off is of course bandwidth; I'm not really sure how quickly video loads on non-mid-Atlantic connections. Ideally, JavaScript would be able to guess the closest video mirror (e.g., the Internet Archive for visitors from CA). For bandwidth-restricted visitors, an audio-only option would be helpful.
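A rough sketch of the guessing part, assuming the mirrors allow cross-origin probes and using response time as a proxy for closeness (the mirror URLs and probe file are placeholders):

  // Sketch: time a tiny byte-range probe against each mirror and pick the
  // fastest responder. Mirror URLs and the probe file are placeholders.
  var mirrors = [
      'http://openmeetings.org/media/',
      'http://www.archive.org/download/openmeetings/'
  ];

  function pickFastestMirror( onPicked ) {
      var best = null, bestTime = Infinity, pending = mirrors.length;
      mirrors.forEach( function ( base ) {
          var started = Date.now();
          var xhr = new XMLHttpRequest();
          xhr.open( 'GET', base + 'probe.ogg', true );
          xhr.setRequestHeader( 'Range', 'bytes=0-1023' );
          xhr.onload = xhr.onerror = function () {
              var elapsed = Date.now() - started;
              if ( xhr.status >= 200 && xhr.status < 400 && elapsed < bestTime ) {
                  bestTime = elapsed;
                  best = base;
              }
              if ( --pending === 0 ) {
                  onPicked( best || mirrors[0] ); // fall back to the default mirror
              }
          };
          xhr.send();
      } );
  }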
That is a good idea, using the geolocation feature of Firefox to choose a nearby mirror. But it's better to handle this at the CDN level ...
Yes, of course; I hadn't thought this far ahead. Are there any suggestions for CDNs that have oggz-chop CGI and reasonably good privacy standards? (I'm aiming for library-grade privacy standards.) DIY maybe, with WMF and others? Cortado applet signing would also be helpful here.
I think ultimately our thin development effort here would be best spent on integrating a media-streaming BitTorrent client into Firefogg. We could add "read only" BitTorrent for Java Cortado users, and include HTTP seeding support (which works with byte-offset requests, so traditional servers can easily be added to the seeding pool). That way your client always tries to download from the fastest peer, be it a normal HTTP server nearby or someone with a fat pipe in Alaska.
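To illustrate why traditional servers are easy to add: an HTTP "seed" is just a server answering ordinary byte-range requests, roughly like this (piece size and URL are made up):

  // Sketch: fetch one "piece" from an HTTP seed with a byte-range request.
  function fetchPiece( url, pieceIndex, pieceSize, onDone ) {
      var xhr = new XMLHttpRequest();
      var start = pieceIndex * pieceSize;
      var end = start + pieceSize - 1;
      xhr.open( 'GET', url, true );
      xhr.setRequestHeader( 'Range', 'bytes=' + start + '-' + end );
      xhr.responseType = 'arraybuffer';
      xhr.onload = function () {
          if ( xhr.status === 206 ) { // 206 Partial Content
              onDone( xhr.response );
          }
      };
      xhr.send();
  }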
I do use web-seeded, trackerless, Archive-hosted torrents to ensure long-term media availability; however, I think the CDN approach would be more sensible on a couple of counts:
- Network efficiency (i.e., avoiding traffic over last-mile lines)
- Avoiding a start-up delay before the video reaches a "ready to play" state
- Long-tail traffic makes BitTorrent critical mass harder to achieve
There's a stigma associated with BitTorrent in corporate environments, so I suggest making BitTorrent disabled-by-default or offering two versions of Firefogg. I like the idea, though.
Also, I've noticed a "local seek" feature but haven't had time to look at it in detail.
This basically does a local seek instead of an oggz_chop server request when the media has been loaded up to that point.
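In rough terms the decision looks like this, using the HTML5 media API's buffered ranges (the ?t= URL form is just illustrative of an oggz_chop-style request):

  // Sketch: seek locally if the target time is already buffered,
  // otherwise request a server-chopped stream starting at that offset.
  function seekTo( video, time ) {
      for ( var i = 0; i < video.buffered.length; i++ ) {
          if ( time >= video.buffered.start( i ) && time <= video.buffered.end( i ) ) {
              video.currentTime = time; // local seek, no new request
              return;
          }
      }
      // Server-side chop fallback; the returned stream begins at the
      // requested offset, so the player must track that base time itself.
      video.src = video.src.split( '?' )[0] + '?t=' + time;
      video.play();
  }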
Ah, I see. Is there a way to load media from the local filesystem?
==Identi.ca==
A good way to increase project visibility would be to integrate Identi.ca dents (also CC-BY). A basic start would be to load dents in a new time-aligned text layer according to a common hashtag, such that visitors could see who dented what during a meeting. I have a PHP script that saves dents for this purpose.
A more-advanced approach would be to also enable visitors to select who to 'subscribe' to while watching a meeting, such that you could follow what your friends dented during a meeting.
Finally, a really fancy approach would be to build an API such that a visitor could dent _from_ OpenMeetings.org while watching a meeting, for later review by others. The dent could be labeled "..from OpenMeetings.org", which would then link to the specific point in the meeting from which the dent originated.
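To make the basic step concrete, here is a sketch of mapping saved dents onto a time-aligned layer (the created_at/text/user fields follow identi.ca's Twitter-compatible API; the rest, including the layer format, is hypothetical):

  // Sketch: convert saved dents into time-aligned text-layer entries,
  // offset from the meeting's start time. Assumes created_at is in a
  // format Date.parse() understands.
  function dentsToLayer( dents, meetingStartMs ) {
      return dents.map( function ( d ) {
          return {
              time: ( Date.parse( d.created_at ) - meetingStartMs ) / 1000, // seconds
              speaker: d.user.screen_name,
              text: d.text
          };
      } ).filter( function ( entry ) {
          return entry.time >= 0; // drop dents from before the meeting started
      } );
  }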
There is some excitement on this topic at Penn State: http://www.colecamplese.com/2009/07/twitter-annotations/
That sounds really cool. The idea to integrate Jabber transcripts was also discussed at one point. Aphid may be able to tell you more about that ;) ... I think what we need is a solid API to enable easy building of those types of tools. Unfortunately I built metavid version 1 a while ago and did not structure it as I would today. Subsequent versions should make it very easy to add layers of transcript data in simple interfaces. Also, things like reCAPTCHA-style transcription & translation are key for quick and easy participation.
/me readies a pitch to reCAPTCHA to see if this would be possible, as audio challenges from "open meetings" might match the OCR work in terms of being a good cause. Need to find a way not to break things...
==Geo-searching==
I've started to include GPS coordinates with video uploads, such that visitors could search for meetings by both location and time. Search-by-location is not yet implemented.
I believe there is some geo-location property work done in Semantic MediaWiki. If we can integrate with that effort, that would be ideal. The Pad.ma site also did some neat things with video tagging in that regard.
Yes, I have a few sessions on semantic geo-location from Wiki-Conference 2009, to be published as soon as I am able.
==Semantic cleanup==
I don't really know what I'm doing with semantic metadata (e.g., labeling speakers). Help?
If you just label speakers, it's useful ... You can see how: maybe you have some video talking /about John Doe/ and some videos /of John Doe talking/, and you want to avoid categorization verbosity and the chance of inconsistent categorization. I.e., one person tags "spoken by john doe", another tags it with just "john doe", and it's hard to know what is what.
Other things you could semantically tag are things like geo-location. (We already put the temporal tag in the wiki temporal annotation title.) Only special properties that have semantic meaning, like geo-location, are ideal for semantic tagging within the context of timed media attributes.
Everything else should fall under a category or "tag", not be a semantic property.
The real value is in combining semantic properties about speakers and, in your case, potentially geo-location: show me meetings where John Doe is in Texas speaking about "Free Speech" (where "Free Speech" is just a category, and "John Doe" and "in Texas" are semantic properties).
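For illustration, in Semantic MediaWiki's inline query syntax that combined search might look something like {{#ask: [[Category:Free Speech]] [[Spoken by::John Doe]] [[Has location::Texas]] }}, where the property names are just placeholders for whatever we end up standardizing on.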
Clip labels weren't sticking for some reason, but I'll take another crack. Should be a simple fix.
==OLPC XO==
I'm curious to know if playback works on XOs, and if not what it would take to make this happen.
We'd probably need to innovate on the interface a little to make it work with the smaller screen. We already support hardware-accelerated rendering via plugins (something the XO needs, as it can't easily decode video in software in Firefox), and it would be nice to integrate with other features of the device, like accepting uploads captured on the device.
Yes, I have an impromptu video-on-XO session from OVC as well, to be published as soon as I am able.
hope this was helpful, keep us updated on your efforts,
Very much so.
Cheers, George
peace, --michael