Hi everyone,
I lead platform work
(<https://github.com/creativecommons/platform-initiative>) at Creative
Commons. As part of that work, we are exploring the potential of a standard
field in EXIF that could make attribution and license info more sticky
across the web. We are currently in the research phase -- talking to major
image hosting platforms (and platforms that read and ingest images) about
what kinds of image metadata they read and retain. Zhou and his engineering
team at Wikimedia directed me to this list as I am seeking feedback from
the Wikimedia community.
Ultimately, we want to make it easier for platforms to display provenance
and license info -- increase the likelihood that when a user lands on an
image, they know who created it and what license to use it under. For
example, images from Wikimedia Commons may get tweeted, but the image
metadata is not retained in tweets. How can we work with platforms to use
the same metadata standard so that info can be retained across them?
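To make the idea concrete, here is a rough sketch (Python with a recent
Pillow; the name, license text, and tag choices are placeholders of ours,
not an agreed standard) of stamping attribution and license info into the
existing EXIF Artist and Copyright fields. Whether platforms preserve
these fields when images are re-encoded is exactly what we are trying to
find out:

# Sketch: write attribution and license info into standard EXIF tags
# (Artist = 0x013B, Copyright = 0x8298) using Pillow.
from PIL import Image

img = Image.open("photo.jpg")
exif = img.getexif()
exif[0x013B] = "Jane Doe"  # Artist (placeholder)
exif[0x8298] = "CC BY 4.0 <https://creativecommons.org/licenses/by/4.0/>"
img.save("photo_tagged.jpg", exif=exif)

Reading the same tags back (for example, img.getexif().get(0x8298)) is
how a platform could surface the license next to the image.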
Since we are just in the research phase now, I welcome your thoughts on
Wikimedia Commons' and Wikipedia's own uses of image metadata. Specifically:
1. The most common image metadata standards we know about are EXIF and
XMP. Which does Wikimedia primarily read and retain? Are there others that
are more widely used?
2. Which standard does Wikimedia prefer, and which would be easiest to
implement -- both for Wikimedia and for the platforms that Wikimedia
interfaces with? In other words, what are the pros and cons of each?
Lastly, I welcome any general thoughts about the feasibility of and need
for such a project.
Best,
Jane
Jane Park
@janedaily
Creative Commons | Los Angeles
Make a donation to support CC in 2015: http://bit.ly/supportcc2015
Hi all,
Observations on the upload and playback of
https://commons.wikimedia.org/wiki/File:Wikipedia_5_million_articles_milest…
:
1. I'm impressed by how much the video compressed -- down to 1.97 Mbps for
the 1080p WebM transcode.
2. I had multiple problems when uploading this file in WebM/VP9 format.
Firefox crashed every time I tried to use the upload wizard with chunked
uploads enabled. IE was successful after I changed my chunked upload
setting to 1 simultaneous upload.
3. The upload progress bar was very inaccurate on Firefox. On IE, the bar
seemed to indicate the upload progress of the 5MB chunks rather than the
progress of the whole video.
4. The automated transcoding to multiple formats on Commons is nice,
although there was a noticeable decrease in audio quality in the OGV file
that I downloaded.
5. When playing the video on ENWP by clicking the thumbnail, which opens
the video popup window, I think the option to view the video fullscreen
should be made more obvious to the viewer inside the popup.
Thanks!
Pine
Assuming that this image will be released with a Commons-compatible
license, sooner or later (maybe many years later), there will be a use case
for increasing the Commons file size limit to 194GB:
http://www.dpreview.com/articles/5817599570/astronomers-create-46-gigapixel…
In the nearer term, perhaps MediaViewer could be tweaked to download and
show only small portions of large images at a time, and/or tiled sets of
images? I think this feature would get a lot of use from the moment of
deployment.
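To illustrate the kind of tiling I mean, here is a rough sketch in Python
with Pillow; the 512-pixel tile size is arbitrary, and for a truly
enormous scan a tool that reads the source in strips (libvips, for
example) would be more realistic than opening the whole file:

# Sketch: pre-cut a large image into fixed-size tiles so a viewer only
# needs to fetch the tiles that are actually on screen.
import os
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # lift Pillow's size guard for trusted input

def make_tiles(path, out_dir, tile=512):
    os.makedirs(out_dir, exist_ok=True)
    img = Image.open(path)
    width, height = img.size
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            box = (left, top, min(left + tile, width), min(top + tile, height))
            img.crop(box).save(
                os.path.join(out_dir, "tile_%d_%d.png" % (left, top)))

make_tiles("large_panorama.tif", "tiles")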
Thanks!
Pine
Hi Multimedians,
I am trying to lightly edit, convert and upload about 20GB of video from
Wikiconference USA. One of the conversion tools suggested to me,
https://wikimedia.meltvideo.com/index.php/main, uses a self-signed security
certificate that is not trusted by my browser for secure OAuth use. Another
tool, the videoconvert tool on Labs, crashes my browser, possibly because
it wants to put all of the video batch in RAM, and 20GB is both more RAM
than I have and far more than the 3GB limit for 32-bit applications. Any
other suggestions about how to convert and upload 20GB of video in a batch?
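For context, the fallback I am considering is a local batch conversion
along these lines (a rough sketch, assuming ffmpeg with the libvpx and
libvorbis encoders is installed; the directory name and bitrate are
placeholders), which at least touches only one file at a time instead of
holding the whole batch in RAM -- though it still leaves the question of
how to upload the results:

# Sketch: convert each source file with a separate ffmpeg invocation so
# only one video is being processed at any moment.
import glob
import subprocess

for src in glob.glob("wikiconference_usa/*.mp4"):
    dst = src.rsplit(".", 1)[0] + ".webm"
    subprocess.check_call([
        "ffmpeg", "-i", src,
        "-c:v", "libvpx",    # VP8; use libvpx-vp9 for VP9
        "-b:v", "2M",
        "-c:a", "libvorbis",
        dst,
    ])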
Pine
I've gone ahead and merged these two changes by TheDJ to the MwEmbedSupport
and TimedMediaHandler extensions (they are interdependent):
* https://gerrit.wikimedia.org/r/#/c/172556/
* https://gerrit.wikimedia.org/r/#/c/172421/
These clean up how TMH's JavaScript modules get integrated into MediaWiki's
ResourceLoader, and should both reduce weird timing bugs and make it muuuch
easier to refactor things further.
It's worth checking these on the beta cluster; if all is well they can go
out to Wikimedia sites with the next branch point.
As an intermediate step, some weeks ago we merged this change to mark
parser output that requires video modules, so the standard RL caching
_should_ mean everything keeps working on cached pages once the newer revs
go live:
* https://gerrit.wikimedia.org/r/#/c/230153/
With all these going out, we'll be able to finalize work on integrating the
VideoJS player framework to replace our old hacked-up version of the
Kaltura player, and generally modernize and vastly improve the playback
experience.
-- brion