In the past, we've had a mixture of fixed bitrates and quality-based
settings for producing video transcodes.
Each has its advantages: fixed bitrates are more predictable for streaming
playback, while fixed quality settings allow reducing the bitrate on
low-complexity scenes to save bandwidth (and increasing it on
high-complexity scenes to keep quality up!).
Since "download and watch it later" is less of a thing on today's internet
than "stream it right now!", I'd been leaning for a while towards moving
more things to fixed bitrates. However, I'm starting to come down on the
side of a fixed quality setting with a variable bitrate...
Overall, variable-rate encoding should lead to lower bandwidth usage for
most parts of most files, while still maintaining high quality on the
scenes that need it.
The downside is that a high-complexity scene encoded at a higher bitrate
might exhaust the playback buffer mid-stream, even though playback had been
keeping up fine on earlier, lower-bitrate scenes.
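As a back-of-the-envelope illustration (the names and numbers below are
invented, not taken from any real player), the stall risk comes down to
whether each segment finishes downloading before its playback deadline:

```javascript
// Toy model of VBR playback over a fixed-rate link (all names hypothetical).
// Each segment covers segmentSeconds of video; segment i must finish
// downloading before its playback deadline, startupDelay + i * segmentSeconds.
// Returns the index of the first segment that misses its deadline (a stall),
// or -1 if playback keeps up throughout.
function firstStall( segmentBits, segmentSeconds, linkBps, startupDelay ) {
	var downloaded = 0;
	for ( var i = 0; i < segmentBits.length; i++ ) {
		downloaded += segmentBits[ i ];
		if ( downloaded / linkBps > startupDelay + i * segmentSeconds ) {
			return i;
		}
	}
	return -1;
}

// Low-complexity scenes around 400 kbps stream fine over a 500 kbps link,
// but a scene spiking to 1.5 Mbps blows the deadline:
firstStall( [ 400e3, 400e3, 400e3, 1500e3, 1500e3, 400e3 ], 1, 500e3, 2 ); // → 3
```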
Once we support adaptive streaming (using MPEG-DASH or something like it),
the system should be able to provide a detailed enough manifest to show
which segments of the file are low-bandwidth and which are high-bandwidth,
so if a bandwidth limitation stops us from viewing one particular segment
at the current resolution, we can bump down, then bump the resolution back
up when bandwidth usage drops again.
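That "bump down, bump back up" decision could be as simple as the sketch
below (a hypothetical heuristic, not how any particular DASH player does
it): for each segment, pick the highest representation whose bitrate fits
the currently measured throughput.

```javascript
// Hypothetical rate-switching heuristic. segmentKbpsPerRep holds one array
// of per-segment bitrates for each representation, ordered lowest to highest
// quality. Returns the index of the highest representation whose bitrate for
// this segment fits within the measured throughput (falling back to the
// lowest representation if none fit).
function pickRepresentation( segmentKbpsPerRep, segmentIndex, measuredKbps ) {
	var choice = 0;
	for ( var r = 0; r < segmentKbpsPerRep.length; r++ ) {
		if ( segmentKbpsPerRep[ r ][ segmentIndex ] <= measuredKbps ) {
			choice = r;
		}
	}
	return choice;
}

var reps = [
	[ 300, 900, 300 ],  // low-res representation: per-segment kbps
	[ 600, 1800, 600 ]  // high-res representation: per-segment kbps
];
pickRepresentation( reps, 1, 1000 ); // → 0: drop down for the heavy segment
pickRepresentation( reps, 2, 1000 ); // → 1: back up again afterwards
```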
If there's no strong objection, I'm going to tinker with the quality
settings for WebM and Ogg Theora video transcodes to try to find quality
settings I'm happy with that result in reasonable bandwidth averages.
An MPEG-DASH manifest (.mpd) specifies a target bitrate on each
resolution representation, but the actual segments can be different sizes.
When they're specified as byte ranges of a source file, the exact segment
size is conveniently available!
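For instance (a sketch with invented names, not real MPD-parsing code),
once the byte ranges are known, per-segment bitrates fall straight out of
the arithmetic:

```javascript
// Given a list of inclusive byte ranges into the source file (the kind of
// thing a manifest's segment list provides) and a fixed segment duration,
// compute the true bitrate of each segment in kbps.
function segmentKbps( ranges, segmentSeconds ) {
	return ranges.map( function ( r ) {
		var bytes = r.end - r.start + 1; // byte ranges are inclusive
		return ( bytes * 8 ) / segmentSeconds / 1000;
	} );
}

segmentKbps( [
	{ start: 0, end: 249999 },      // a quiet scene
	{ start: 250000, end: 999999 }  // a busy scene
], 5 ); // → [ 400, 1200 ]
```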
FYI, this week's presentations, according to the Etherpad, are:
* *Derk-Jan Hartman*: Video.js progress
* *Dmitry Brant*: Wikidata infoboxes in Android app
* *Joaquin Hernandez*: Vicky chat bot
* *Baha*: mobile printing for offline reading
* *Monte*: "smart random" content service endpoint
* *Erik*: Geo boosting search queries
On Thu, May 12, 2016 at 9:17 AM, Adam Baso <abaso(a)wikimedia.org> wrote:
> On Thu, Apr 14, 2016 at 12:13 AM, Adam Baso <abaso(a)wikimedia.org> wrote:
> > Hi all,
> > The next CREDIT showcase will be Thursday, 12-May-2016 at 1800 UTC (1100
> > SF).
> > https://www.mediawiki.org/wiki/CREDIT_showcase
> > For this one we'll use Hangouts on Air for presenters, and the customary
> > YouTube stream for viewers.
> > See you next month!
> > -Adam
> Wikitech-l mailing list
For the last decade we've supported uploading SVG vector images to
MediaWiki, but we serve them as rasterized PNGs to browsers. Display
resolutions keep going up and up, but so does concern about low-bandwidth
mobile users.
This means we'd like sharper icons and diagrams on high-density phone
displays, but are leery of adding extra srcset entries with 3x or 4x
size PNGs, which could become very large. (In fact, MobileFrontend
currently strips even the 1.5x and 2x renderings we have now, making
diagrams very blurry on many mobile devices. See
https://phabricator.wikimedia.org/T133496 - a fix is in the works.)
Here's the base bug for SVG client-side rendering:
I've turned it into an "epic" story tracking task and hung some blocking
tasks off it; see those for more details.
TL;DR stop reading here. ;)
One of the basic problems in the past was reliably showing SVGs natively in
all browsers without breaking the HTML caching layer. This is neatly
resolved for current browsers by using the "srcset" attribute -- the same
one we use to specify higher-resolution rasterizations. If, instead of PNGs
at 1.5x and 2x density, we specify an SVG at 1x, the SVG will be loaded
instead of the PNG.
Since all srcset-supporting browsers allow SVG in <img> this should "just
work", and will be more compatible than using the experimental <picture>
element or the classic <object> which deals with events differently. Older
browsers will still see the PNG, and we can tweak the jquery.hidpi srcset
polyfill to test for SVG support to avoid breaking on some older browsers.
This should let us start testing client-side SVG via a beta feature (with
parser cache split on the user pref) at which point we can gather more
real-world feedback on performance and compatibility issues.
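The markup itself is tiny. Here's a sketch of the attribute we'd emit (the
URLs and function name are made-up examples, not real thumbnail paths): the
rasterized PNG stays in src for older browsers, and the SVG is offered at
1x via srcset, which srcset-capable browsers will prefer.

```javascript
// Build the <img> markup: PNG fallback in src, SVG offered via srcset.
// (Example URLs are invented; real MediaWiki thumbnail paths differ.)
function svgSrcset( pngUrl, svgUrl ) {
	return '<img src="' + pngUrl + '" srcset="' + svgUrl + ' 1x">';
}

svgSrcset( '/thumb/220px-Diagram.svg.png', '/images/Diagram.svg' );
// → '<img src="/thumb/220px-Diagram.svg.png" srcset="/images/Diagram.svg 1x">'
```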
Rendering consistency across browser engines is a concern. Supposedly
modern browsers are more consistent than librsvg but we haven't done a
compatibility survey to confirm this or identify problematic constructs.
This is probably worth doing.
Performance is a big question. While clean simple SVGs are often nice and
small and efficient, it's also easy to make a HUGEly detailed SVG that is
much larger than the rasterized PNGs. Or a fairly simple small file may
still render slowly due to use of filters.
So we probably want to provide good tools for our editors and image authors
to help optimize their files: show the renderings and the bandwidth balance
versus rasterization; maybe provide an in-wiki implementation of svgo or
other lossy optimizer tools; warn about things that are large or render
slowly; maybe provide a switch to always run particular files through
rasterization.
And we'll almost certainly want to strip comments and whitespace to save
bandwidth on page views, while retaining them all in the source file for
download and re-editing.
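A naive sketch of the kind of lossless stripping meant here (a real
implementation should use an XML parser rather than regexes): drop XML
comments and collapse whitespace between tags, leaving the stored source
file untouched.

```javascript
// Strip XML comments and inter-tag whitespace from an SVG string.
// Regex-based for illustration only; CDATA sections and whitespace inside
// <text> elements would need a proper XML-aware pass.
function stripSvg( svg ) {
	return svg
		.replace( /<!--[\s\S]*?-->/g, '' )
		.replace( />\s+</g, '><' )
		.trim();
}

stripSvg( '<svg>\n  <!-- hand-drawn in Inkscape -->\n  <rect/>\n</svg>' );
// → '<svg><rect/></svg>'
```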
Feature parity also needs more work. Localized text in SVGs is supported by
our server-side rendering, but this won't be reliable in the client, which
means we'll want to perform a server-side transformation that creates
per-language "thumbnail" SVGs. Fonts for internationalized text are a big
deal, and may require similar transformations if we want to serve them...
which may mean additional complications and bandwidth usage.
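As a toy illustration of the per-language "thumbnail" idea (the data shape
here is invented; real SVGs express alternatives with switch/systemLanguage
elements): the server would resolve each translatable label to the request
language before serving.

```javascript
// Resolve one translatable label to a single language, the way a
// per-language "thumbnail" SVG would bake in exactly one text alternative.
// alternatives maps language codes to strings; '' is the default fallback.
function pickLanguage( alternatives, lang ) {
	return Object.prototype.hasOwnProperty.call( alternatives, lang ) ?
		alternatives[ lang ] :
		alternatives[ '' ];
}

pickLanguage( { '': 'Cell nucleus', de: 'Zellkern' }, 'de' ); // → 'Zellkern'
pickLanguage( { '': 'Cell nucleus', de: 'Zellkern' }, 'fr' ); // → 'Cell nucleus'
```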
And then there are long-term goals of taking more advantage of SVG's
dynamic nature -- making things animated or interactive. That's a much
bigger question and has implementation and security issues!
At the Wikimedia Conference in Berlin I met with Felix from Wikimedia
Ghana, who is super interested in getting more immersive media available,
such as 360-degree panoramic photos ("photo spheres"); I showed him the
Tool Labs widget using Pannellum to do WebGL spherical photo viewing -- see
https://phabricator.wikimedia.org/T70719#2204864 -- and he was very excited
to see that it's something we could probably work out how to integrate in
the nearish term.
That got me thinking more generally about new media types (video, panos,
stereoscopic photos/videos/panos, 3D models, interactive diagrams, etc.)
and how we can extend them to support annotations and linking in a way that
could create immersive visual experiences with the same kind of rich
information and interlinking that Wikipedia is famous for in the world of
text.
Ladies and gentlemen, I give you: "*Epic saga: immersive hypermedia (Myst
meets Wikipedia)*"!
I would be real interested to hear y'all's ideas on medium to long term
feasibility and desirability of this sort of system, and what we can pull
more directly into the short term.
For instance, I would love to get the panoramic / spherical viewers
integrated into MMV, which is much easier than figuring out how to do
clickable annotations in a 3D environment. ;)
Medium term, I would also love to see us look at the annotation system on
Commons that's currently done in site JS, and see if we can build a
future-extensible system that's more integrated into the wiki and can be
used in MMV.
Longer term, I think it'll just be nice to have these kinds of long-term
goals to work towards.
Thoughts? Ideas? Am I crazy, or just crazy enough? ;)