Hi Gilles,
Thank you for taking a look at my proposal and giving your valuable
feedback. The proposal I sent was a brief outline of my project; I kept it
short, as I had been advised to do. I have answered some of your
queries about the .x3d format below.
The source of an X3D file is an XML document, and this XML can be used to
extract information about the file. There are functions like
getScreenshot(), which returns a .png file of the 3D image, as well as
functions to manipulate the camera views (described in the x3d API
documentation). Using these, we can set a standard view and call
getScreenshot() to get a .png file.
Here is how getScreenshot() actually works:
" getScreenshot()
Returns: URL to image
Returns a Base64 encoded data URI containing png image consisting of the
current rendering. The browser will interpret this as a PNG image and
display it. A list of browsers which support data URI can be found here.
The following example illustrates the usage:
var url = ...runtime.getScreenshot();
var img = document.createElement("img");
img.src = url;
...
"
This is taken from the documentation of x3d.
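To actually save that screenshot server-side, the Base64 data URI would need to be decoded back into raw PNG bytes. A minimal Node.js sketch (the helper name and the sample URI are mine; only the data-URI prefix format comes from the documentation above):

```javascript
// Extract the raw PNG bytes from a data URI such as the one
// returned by x3dom's runtime.getScreenshot().
// Hypothetical helper; only the "data:image/png;base64," prefix
// format is taken from the documentation quoted above.
function dataUriToPngBuffer(uri) {
  var prefix = "data:image/png;base64,";
  if (uri.indexOf(prefix) !== 0) {
    throw new Error("Not a Base64-encoded PNG data URI");
  }
  return Buffer.from(uri.slice(prefix.length), "base64");
}

// Example: the decoded bytes start with the PNG signature
// 0x89 'P' 'N' 'G'.
var buf = dataUriToPngBuffer("data:image/png;base64,iVBORw0KGgo=");
// buf[0] === 0x89, bytes 1..3 spell "PNG"
```

The resulting buffer could then be written to disk with `fs.writeFileSync`.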
Now, to textures. X3D provides several texture-related nodes: Background,
ImageTexture, MovieTexture, MultiTexture, and PixelTexture. In
all cases, texture maps are defined by 2D images that contain an array of
colour values describing the texture. If we want detailed texture, we can
rebuild it from this map. The only problem I see here is that
this process might not be feasible in terms of memory and speed; maybe it
will work out. Otherwise, I believe even the screenshot will retain some
texture, and we can use that as well. The good thing here is that the
texture map provided by x3d is expressed as PNG pixels, which should
make compatibility easy.
Some more details on texture maps :
Texture maps are defined in a 2D coordinate system (s, t) that ranges over
[0.0, 1.0] in both directions. The bottom edge of the image corresponds to
the S-axis of the texture map, and the left edge of the image corresponds to
the T-axis of the texture map. The lower-left pixel of the image
corresponds to s=0, t=0, and the top-right pixel of the image corresponds
to s=1, t=1. Texture maps may be viewed as two dimensional colour functions
that, given an (s, t) coordinate, return a colour value colour(s, t).
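That colour(s, t) lookup can be sketched in code. Note the t-axis flip: t = 0 is the bottom row of the texture, while image data is usually stored top row first. This is an illustrative sketch of my own, not an x3d API:

```javascript
// Map an (s, t) texture coordinate in [0, 1] x [0, 1] to a pixel
// in a width x height image whose rows are stored top-to-bottom.
// (s=0, t=0) is the lower-left pixel and (s=1, t=1) the top-right,
// matching the x3d texture coordinate description above.
function texelIndex(s, t, width, height) {
  var x = Math.min(width - 1, Math.floor(s * width));
  // Flip t: t=0 is the bottom row, i.e. the last stored row.
  var y = Math.min(height - 1, Math.floor((1 - t) * height));
  return y * width + x;
}

// colour(s, t) over a flat RGB array (3 bytes per pixel).
function colour(pixels, width, height, s, t) {
  var i = texelIndex(s, t, width, height) * 3;
  return [pixels[i], pixels[i + 1], pixels[i + 2]];
}
```

A real implementation would also apply the wrapping/clamping and filtering modes that x3d texture nodes specify, which this sketch ignores.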
I have not studied COLLADA in much depth and will get back to you on it in
some time. For the moment, the advantages of using x3d seem to be:
-> x3d seems to be pretty well documented, so it should be easy to go
into even further depth.
-> All x3d files have XML source, which I have no problems with.
-> Further, since we can get all contents of the file in XML, it should be
easy to validate as well.
-> Functions like getScreenshot() should make our process comparatively
easy.
-> Also, x3d files have meta statements, which provide information about
the x3d scene as a whole. We can take this information directly from them.
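As a sketch of what reading those meta statements could look like (a regex-based illustration of my own; a real implementation should use a proper XML parser):

```javascript
// Collect <meta name="..." content="..."/> pairs from the head of an
// X3D document. Regex-based sketch only: it assumes the name attribute
// comes before content, and does not handle all XML edge cases.
function extractMeta(x3dXml) {
  var re = /<meta\s+name=['"]([^'"]+)['"]\s+content=['"]([^'"]*)['"]/g;
  var meta = {};
  var m;
  while ((m = re.exec(x3dXml)) !== null) {
    meta[m[1]] = m[2];
  }
  return meta;
}

// Example with a typical X3D head section (file name is illustrative):
var xml = '<X3D><head>' +
  '<meta name="title" content="HelloWorld.x3d"/>' +
  '<meta name="created" content="30 October 2000"/>' +
  '</head><Scene/></X3D>';
// extractMeta(xml).title === "HelloWorld.x3d"
```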
Another thing I found that might be relevant:
"*Blender export* :
Converting Blender scenes into X3DOM webpages is pretty simple: Blender
already supports direct X3D export even so there are some issues. Blender
Version 2.4 seems to export some more nodes (e.g. lights), but in general
it works. We will explore this more in the future, but an exported X3D file
( may afterwards be easily be integrated into an HTML webpage using X3DOM.
Just finish your model in Blender and export to x3d file format.
There are two ways to get your X3D data into the HTML page, and both
include no coding at all:
1. Two-file solution, link the X3D-file
Just use a very simple X3D scene in your HTML file, which, more or less,
only includes an <inline> node. This node references and asynchronously
downloads the X3D file. Therefore you need a real web server (e.g. Apache)
running while using <inline> nodes in X3DOM.
2. One-file solution, embed the X3D data inside of the HTML page
You can embed the X3D file by hand in a (X)HTML page, but this may include
some hand-tweaking. Better use the aopt-converter described in Generic 3D
data conversion. This can be done offline with a single command:
aopt -i horse.x3d -N horse.html (horse.x3d is an x3d file being converted
into an html one)
You also may use the converter online. Just open horse.x3d with your
favorite text editor and paste it into the source text field. Choose XML
encoding (X3D) as input type and HTML5 encoded webpage as output type and
press the Convert encoding button."
Even though I am more sure about the former method, I am mentioning this
here as there was some discussion on the bug related to Blender files. If
this works, then using x3d we can support Blender files as well.
I would be highly grateful if you could provide me with further insights
into this project.
Regards,
Umang
Hi all,
I am Umang Sharma from IIITH (International Institute of Information
Technology - Hyderabad), India, and am interested in working on one of the
projects proposed by the community, i.e. "New media types supported in
Commons", as a GSoC candidate. I have drafted a proposal for the same.
This project has been a long-standing community request, and it would be
great if I were given the opportunity to work on it and make some
progress. I have planned a basic outline of how to approach the problem. I
have decided to provide a solution for either the x3d or COLLADA file
format (both used for representing computer graphics), and will work on the
other if time permits during my project. However, I would like feedback on
which file format is more in demand currently. Also, if anyone has any
recommendations for efficient raster image generation, please do tell. Please
go through my proposal and tell me how I can improve it and make it up to the
expectations of the community.
Link : https://www.mediawiki.org/wiki/User:Umang13/Gsoc14
Regards,
Umang
Dear Admin@ Multimedia Mailing List,
I am a researcher from Aalto University, Finland. We are currently conducting an online survey on photo sharing practices and motivations. The audience of the Multimedia Mailing List would be highly suitable participants for our study. Would it be possible to post the following text on your mailing list?
Thank you and have a great day.
Best regards,
Aqdas
___________________________
Researchers from Aalto University, Finland are investigating the practices and motivations behind photo sharing activity. We would highly appreciate it if you could participate in our study by filling in the online survey, which will take approximately 5 minutes to complete. Results from the study will be published in a peer-reviewed journal or conference.
If you have any further questions relating to this research, please feel free to contact the researcher.
https://www.webropolsurveys.com/S/9923BBCAB850688B.par
Thank you in advance.
Best regards,
Aqdas Malik
aqdas.malik(a)aalto.fi
Researcher, Strategic Usability Group
Aalto University
+ multimedia
On Tue, Mar 18, 2014 at 11:44 AM, Aaron Arcos <aarcos.wiki(a)gmail.com> wrote:
> On Tue, Mar 18, 2014 at 8:47 AM, Gergo Tisza <gtisza(a)wikimedia.org> wrote:
>
>> Gilles, thanks for the great analysis!
>>
>>
> +1, good stuff! I like where the discussion is heading, in particular
> the idea of comparing MV against the "current" implementation via a test;
> that's the way to go.
>
>
I'm merging the discussion back with the list. If you're on the list and
have been following this discussion, the last 6 emails quoted below are
probably interesting.
> Have you considered comparing the Media Viewer image load to the current
> image load on Commons?
That's what I meant when I talked about comparing Media Viewer to the file
page (basically the experience without media viewer). Yes, I think we can
and should compare to Commons specifically, as files hosted on Commons are
the majority on our large hosted wikis, as opposed to files uploaded
directly to the current wiki.
> Either way, I think it would be useful to have some form of image load
> metric for each of our key pilot sites
>
Aren't all pilot sites hosted in the same place? I think tracking them on
the detailed network performance level, as we already do, is sufficient to
spot issues in the ops realm where one site would get unusually slower
compared to the others. We can check to be certain whether or not Media
Viewer vs File page gives us different results between the sites, but if
they're all hosted in the same place, they should give us the same results.
If I'm wrong about hosting location, then yes, we should definitely track
them.
> If you could make a practical proposal towards that simple goal, that would
> be wonderful.
My practical proposal is that right now we:
- improve the current limn graphs to cover useful information that I dug up
manually in https://www.mediawiki.org/wiki/Multimedia/Performance_Analysis
- write the Media Viewer vs File page/Commons test
- look at the results of this test and define *simple* acceptance criteria
for now and the future, similar to the ones I've suggested earlier
- re-do a full analysis of the results (limn graphs + vs. test) and check
if there's anything preventing us from launching
And if time allows:
- add the thumb dimensions to the markup in core, so that we can display
the blurred thumbnail 200-300ms sooner on average (helps with perception)
- investigate the API calls that are considerably slower than others to see
if there's anything we can improve on that front
All the other ideas regarding measuring speed perception are worth keeping
in mind, but I don't think they're worth doing right now, given the short
timeframe we're looking at. I think it's something that we should keep up
our sleeve and use if we launch and the feedback is really negative with a
lot of people complaining that Media Viewer feels slower than the old way.
By that point we'll have done everything we can in terms of improving real
performance, so studying the effect on users of our speed perception
strategies is pretty much the only thing we'll have left as actionable. I
see it as a last resort, because if people complain, it's more likely that
the cause is the real performance, not the tricks we've used to make it
appear faster. Tricks can be counter-productive, though, which is why I'm not
ruling this part out, just postponing it.
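As an illustration of what a "simple acceptance criteria" check over the test's load-time samples could look like (the median-based comparison and all example numbers are mine, not agreed values):

```javascript
// Given load-time samples (ms) from the Media Viewer vs file page
// browser test, check a simple acceptance criterion: the median
// Media Viewer time must be at most `ratio` times the median file
// page time. Thresholds here are illustrative, not agreed numbers.
function median(samples) {
  var s = samples.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function meetsCriterion(mediaViewerMs, filePageMs, ratio) {
  return median(mediaViewerMs) <= ratio * median(filePageMs);
}

// e.g. with a warm JS cache we might require MV <= 75% of file page:
// meetsCriterion([900, 1100, 1000], [1500, 1400, 1600], 0.75) → true
```

Wired into the automated browser tests, a criterion like this could fail the build whenever a code change makes the experience slower than agreed.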
On Tue, Mar 18, 2014 at 5:27 PM, Fabrice Florin <fflorin(a)wikimedia.org> wrote:
> Thanks, Gilles, this is really, really thoughtful!
>
> Have you considered comparing the Media Viewer image load to the current
> image load on Commons? That would at least give us a sense of how much
> longer it takes with Media Viewer than before?
>
> Either way, I think it would be useful to have some form of image load
> metric for each of our key pilot sites, so we can get a sense of how long
> it takes for people to view the images.
>
> If you could make a practical proposal towards that simple goal, that
> would be wonderful.
>
> Cheers,
>
>
> Fabrice
>
>
> On Mar 18, 2014, at 8:08 AM, Gilles Dubuc <gilles(a)wikimedia.org> wrote:
>
>> having a measure that helps us to get an idea of whether efforts in that
>> front are working is useful
>>
>
> "Perceived speed" by definition, is subjective, it can't be measured
> automatically. Efforts to measure figures like the amount of time the
> blurred image takes to show up are in my opinion, a waste of time, because
> once we have this figure the only effect it'll have is the team saying "oh,
> cool" at the result, and then there won't be anything actionable. Because
> Media Viewer's task isn't to display blurred images, the measure of success
> for the project is time taken for the actual image showing up. Even if you
> measure very precisely how long a spinner or a placeholder is on the screen
> for, it won't answer the real question which is: does Media Viewer *feel*faster with those visual tricks/preloading? And I think that's an entirely
> separate debate than performance, one that would need to be answered by
> questionnaires and user testing.
>
>> Do we include also the access to the file (that some users do to view it
>> in more details) as part of the things we compare with Media Viewer?
>>
>
> Do you mean the full resolution image? I think a more pertinent question
> than comparing timing would be to measure how many times people open the
> full resolution image with and without Media Viewer. This is more of a
> general visit statistic to follow. I imagine that the question you want to
> answer here is whether people feel less need to open the full
> resolution image, thanks to the extra detail visible on the screen with
> media viewer. I'm not sure that we can measure that, though, because media
> viewer and the file page can both access the same full resolution image.
> I.e. a CDN hit on the full res doesn't mean the person opened the full
> resolution specifically, it could be that media viewer displayed it because
> the person's screen resolution required it, or that someone gave them a
> link to the full resolution image, etc. It's an interesting question, but
> at a glance measuring it seems quite difficult.
>
>
>> Having an idea of how much such feature is used could provide us an
>> estimate of the time saved by the user
>
>
> We can't pretend to measure "time saved by the user" with javascript
> measurement. It just isn't measurable because it depends on the user's
> workflow. Someone could rightfully argue that Media Viewer "takes more
> time" because in their old workflow they're used to opening tons of tabs in
> the background, and they spend enough time on each tab that interests them
> that they can very quickly close and dismiss the ones they're not
> interested in, and every time they see another tab it's already finished
> loading. The advantage of opening tabs here in terms of time saved is
> simply because that user's manual image preloading technique is a lot more
> aggressive than media viewer's. So, no, we can't measure the "time saved by
> a user" in javascript, publishing a figure that calls itself that would be
> misleading and would probably backfire on us. To measure "time saved by the
> user" we'd need a user testing session where we compare how long people
> take to go through two very similar image galleries with media viewer vs
> without it. Then, with a big enough sample, you can argue that one is a
> time saver compared to the other one.
>
>
> On Tue, Mar 18, 2014 at 3:26 PM, Pau Giner <pginer(a)wikimedia.org> wrote:
>
>> On the topic of measuring detailed things from an end-user perspective
>>> (time it takes to display the blurred thumb, to hit next, etc.) I think
>>> that they're too complex to be worth doing at the moment, and we have
>>> nothing to compare them against.
>>
>>
>> I agree that "perceived performance", as the name suggests, is subjective,
>> but having a measure that helps us get an idea of whether efforts on
>> that front are working is useful. The time the user waits until seeing some
>> kind of progress (e.g., getting the blurry image) is useful in that context.
>>
>> For example, the amount of time it takes to display the blurred image has
>>> no equivalent on the file page, so that figure can't really be used to
>>> determine success.
>>
>>
>> For the file page it is true that the perceived performance will
>> approximate the real one.
>> Do we include also the access to the file (that some users do to view it
>> in more details) as part of the things we compare with Media Viewer? If
>> that is the case, big files<https://upload.wikimedia.org/wikipedia/commons/4/45/Empire_State_Building_p…> that
>> are progressively loaded by browsers would be more comparable, and it will
>> be interesting to check both times (showing something vs. showing the
>> complete image) in MediaViewer and outside of it.
>>
>> Another interesting time to take into account is the time saved through
>> navigation controls. Having an idea of how much this feature is used could
>> provide us an estimate of the time saved by the user (who currently has
>> to go back and forth dealing with additional page loads or tab switching in
>> the browser).
>>
>>
>> Having said all that, I totally understand that measuring the real
>> performance is considered a higher priority than the perceived performance,
>> but measures to estimate the latter should not be overlooked.
>>
>> Pau
>>
>>
>>
>> On Tue, Mar 18, 2014 at 12:15 PM, Gilles Dubuc <gilles(a)wikimedia.org> wrote:
>>
>>> Goal: The Media Viewer would be considered accepted if it can display
>>>> 1-2 Mb images in less than 3 seconds at least 80% of the time, during the
>>>> course of a week.
>>>>
>>>
>>> The issue with that goal is that its performance is almost entirely out
>>> of our hands. As seen in my preliminary analysis of the data (
>>> https://www.mediawiki.org/wiki/Multimedia/Performance_Analysis ), some
>>> places like Russia seem to have terrible performance compared to average
>>> internet speed in those countries and there's nothing we (multimedia team)
>>> can do to change that. Chances are, if media viewer can't display a 1-2MB
>>> image in less than 3 seconds 80% of the time, neither can the file page or
>>> any wiki page with an image of the same size on it. Because the issue is
>>> most likely the connectivity between users in those countries and our
>>> servers, not the technique used to deliver it (as part of the pageload or
>>> loaded by JS).
>>>
>>> I think that the main issue with the goal options I've seen so far is
>>> that they focus on general performance of Media Viewer as an isolated
>>> entity. The network performance tracking we've set up is good to identify
>>> issues on our end. For example an API call that might be too slow and that
>>> maybe we can optimize, or the fact that we could patch mediawiki core by
>>> adding thumb dimensions to display the thumb sooner. Also, it helps us keep
>>> track of any ops issues that might be affecting the product we're
>>> responsible for (media viewer) on an ongoing basis. They help make sure
>>> that we're doing the most we can to make things fast.
>>>
>>> What these network performance stats aren't good for, though, is to
>>> determine whether media viewer is successful as a product. Because the
>>> performance of our servers, our CDNs and our networking infrastructure are
>>> all bundled up in the same figure, indistinguishable from one another. It
>>> doesn't tell us if Media Viewer is good in the context of an infrastructure
>>> that won't change overnight.
>>>
>>> I think the only measure of success we can do in our realm is how
>>> opening an image in media viewer compares to opening a file page or not.
>>> We're not tracking that yet. The only way we could do that on the user's
>>> end I can think of is to load a file page in an invisible iframe and
>>> measure how long it takes for it to load, and better yet how long it takes
>>> for the image on that file page to load too. And compare that to an image
>>> load in the media viewer. However it's really challenging to measure that,
>>> because we can't stop the user from navigating images in the media viewer
>>> while we attempt to measure a file page in an iframe, and the navigating
>>> they do would trigger requests that use up bandwidth, etc. Thus, I don't
>>> think we can get pertinent figures collected directly from users that will
>>> tell us if media viewer is doing a good job in terms of performance or not,
>>> because there would be too much noise in the data collection.
>>>
>>> I think that automated testing is the way to go, we should package this
>>> performance measurement (media viewer vs file page) as a series of browser
>>> tests and check the figures that way. Even better if they can run on
>>> something like cloudbees where there would be some latency between where
>>> the tests run and our servers. Now, there are variables at play when making
>>> a media viewer/file page comparison:
>>>
>>> - Is the JS already cached? As Gergo mentioned, the JS being uncached
>>> will happen the first time and then every 30 days-ish or whenever we update
>>> media viewer (once a week at most, usually). I think we should measure both
>>> variants (with JS cached and with JS not cached), to assess how bad the
>>> effect of cold cache is. There are a number of ways we could address this
>>> issue, some more aggressive than others in terms of bandwidth (eg. preload
>>> the JS when the mouse cursor gets near a thumbnail, preload the JS after
>>> the pageload is done, etc.). This is worth measuring because it's
>>> actionable. The reason why we haven't taken those measures yet is that
>>> they're a balancing act (wasting people's bandwidth vs providing a faster
>>> experience).
>>>
>>> - What screen resolution are we testing against? The bigger the
>>> resolution, the bigger the image, the slower the image load. I couldn't
>>> find any figures about the average desktop screen resolution of people
>>> visiting our wikis. Maybe someone knows where to get that figure if we have
>>> it? On that front we could either test the performance of the most common
>>> resolutions, or test the performance of the average resolution.
>>>
>>> - Varnish cache hit/varnish cache miss. We know that's a big slowdown
>>> when it happens, and we know that this won't get solved for another few
>>> months. That variable, however, also applies to file pages. The image on
>>> the file page is a thumb too and it can be a varnish miss as well. We don't
>>> see it often because it stops as soon as one person (usually the author)
>>> visits the file page. Media viewer just increases the probability of
>>> hitting a varnish cache miss because we have a few buckets instead of a
>>> single size/bucket for the file page. I think this is an isolated problem
>>> and actually one that needs more serious math to measure the effect of. Why
>>> more serious math? Because for one, it depends on the distribution of
>>> desktop resolutions among our visitors, compared to the buckets we've
>>> picked. If for example a given bucket size covers 80% of our visitors, then
>>> in 80% of the cases, the effect of varnish misses is exactly the same as
>>> the file page. We also have to consider if it's worth spending time
>>> studying this issue at all, knowing that a few months from now ops will
>>> have the disk capacity that will allow us to pregenerate the bucket sizes
>>> we need. And knowing that there's literally nothing we can do about it at
>>> this point, besides reducing the amount of buckets to reduce the likelihood
>>> of being the first person to hit one. My recommendation for that issue is
>>> that we use the technical performance data we're collecting already to
>>> determine what percentage of image views are affected by it over time on
>>> wikis that have signed up for the launch. Then we'll get an idea of how bad
>>> it really is on a wiki where everyone has media viewer (because, by network
>>> effect, the more people there are, the less likely you will be to be the
>>> first person to use media viewer on a given file). But it's not worth
>>> obsessing over right now, because the low traffic of the tests sites makes
>>> it happen to us a whole lot more than it would in a context where every
>>> visitor has media viewer.
>>>
>>> So, once we've settled what we do with the above variables, we can come
>>> up with acceptance criteria for media viewer's performance, which could
>>> look like:
>>> - with a cold JS cache, on an average desktop resolution, with a varnish
>>> hit, media viewer shows the image in at most 100% of the time it takes for
>>> the file page to do the same
>>> - with a warm JS cache, on an average desktop resolution, with a varnish
>>> hit, media viewer shows the image in at most 75% of the time it takes for
>>> the file page to do the same
>>> - with a warm JS cache, on a large desktop resolution, with a varnish
>>> hit, media viewer shows the image in at most 120% of the time it takes for
>>> the file page to do the same
>>>
>>> An added advantage to making this measurement automated is that it can
>>> be baked in as a test failure/success criteria. So if suddenly we make a
>>> code change that mistakenly makes the experience slower than our criteria,
>>> the team would be notified automatically.
>>>
>>> On the topic of measuring detailed things from an end-user perspective
>>> (time it takes to display the blurred thumb, to hit next, etc.) I think
>>> that they're too complex to be worth doing at the moment, and we have
>>> nothing to compare them against. For example, the amount of time it takes
>>> to display the blurred image has no equivalent on the file page, so that
>>> figure can't really be used to determine success. Graphs of those figures
>>> expressed in user-centric terms would be easier to understand to outsiders,
>>> but in terms of troubleshooting technical issues they're not better than
>>> the data we're already collecting. They're worse, in fact, because any
>>> number of things could happen on the users' computers between action A and
>>> action B (browsers freezing tabs comes to mind) that would quickly render a
>>> lot of those virtual user-centric figures meaningless. I think we should
>>> focus on what makes the core experience better, not spending time building
>>> entertaining graphs.
>>>
>>>
>>> On Tue, Mar 18, 2014 at 1:00 AM, Fabrice Florin <fflorin(a)wikimedia.org> wrote:
>>>
>>>> Hi Multimedia team (keeping it to a short list so we can reach closure
>>>> soon on this important topic):
>>>>
>>>> Did you have any comments on my email of Friday on the Image Load
>>>> Study? (see below) That proposal was based on last week's conversations
>>>> with you guys.
>>>>
>>>> If this general direction works for you, I propose the following main
>>>> acceptance criteria from a performance standpoint:
>>>>
>>>> Goal: The Media Viewer would be considered accepted if it can display
>>>> 1-2 Mb images in less than 3 seconds at least 80% of the time, during the
>>>> course of a week.
>>>>
>>>> Verification: This goal could be verified with a histogram showing
>>>> total load events in a week for 1-2 Mb images, with these deciles: number
>>>> of image load events in under 1 second? in 1-2 seconds? in 2-3 seconds? ...
>>>> and so on, up to 10 seconds or more. If 80% of these events take place in
>>>> the first three deciles, we would have reached our goal.
>>>>
>>>> Would this seem like a reasonable basic measure of success for us in
>>>> coming weeks? Or would you recommend another goal?
>>>>
>>>> If we had more time, we could track a variety of other goals, but I am
>>>> looking for a single metric we can focus on and actually measure in time
>>>> for launch. If we want more granular criteria, I proposed other possible
>>>> performance targets by image size in card #149.
>>>>
>>>> On the assumption that this is a good direction to pursue, I propose we
>>>> focus on the following 4 high priority cards for our next steps:
>>>>
>>>> #149 Define acceptance performance criteria for the media viewer (see
>>>> above, let's edit as needed to reflect our team goal)
>>>> https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/149
>>>>
>>>> #364 Instrumentation for timing of image load, lightbox UI load
>>>> https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/364
>>>>
>>>> #292 Histograms and decile charts for performance
>>>> https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/292
>>>>
>>>> #198 Analyze Image Load Data with Dashboards
>>>> https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/198/
>>>>
>>>> I also created this Metrics Tasks Wall, based on Gergo's Epic
>>>> Story:#359, to make it easier to track all these tickets:
>>>>
>>>> https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards?favorit…
>>>>
>>>> Given our primary goal proposed above, I would recommend that we
>>>> prioritize #364 and #292 over #198 -- and postpone the bandwidth-related
>>>> tickets, as recommended in the P.S. below.
>>>>
>>>> Please let me know what you think and what you recommend for our next
>>>> steps.
>>>>
>>>> Thanks,
>>>>
>>>>
>>>> Fabrice
>>>>
>>>>
>>>> P.S.: For now, I recommend that we de-emphasize these bandwidth-related
>>>> metrics, since they are unlikely to happen in our time-frame:
>>>>
>>>> #361 Collect bandwidth stats
>>>> https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/361
>>>>
>>>> #340 More Image Load Dashboards by Bandwidth
>>>> https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/340
>>>>
>>>>
>>>> On Mar 14, 2014, at 4:13 PM, Fabrice Florin <fflorin(a)wikimedia.org>
>>>> wrote:
>>>>
>>>> Hi everyone,
>>>>
>>>> We would appreciate your advice on our upcoming research study of image
>>>> load times on Media Viewer.
>>>>
>>>> Here are proposed goals, questions and outcomes for this study. They
>>>> are presented for discussion purposes, not as a prescriptive requirement -
>>>> and will be adjusted based on your feedback.
>>>>
>>>>
>>>> *I. Goals*
>>>> The goal of this study is to determine whether or not Media Viewer is
>>>> loading images fast enough for the majority of our users in most common
>>>> situations.
>>>>
>>>> As a typical user of the Media Viewer, I want images to load quickly,
>>>> in just a few seconds, so I don't have to wait a long time to see them.
>>>>
>>>> Here are our recommended performance targets for image load times by
>>>> connection speed, to match user expectations on the Web:
>>>> * 1-2 seconds for a medium-size image on a fast connection
>>>> * 2-3 seconds for the same image on a medium connection
>>>> * 5-8 seconds for the same image on a slow connection
>>>>
>>>> If tracking connection speeds is too hard in our time-frame, we could
>>>> base our performance targets on image size instead. For example:
>>>> * 1-2 seconds for a small-size image on a medium connection
>>>> * 2-3 seconds for a medium-size image on the same connection
>>>> * 5-8 seconds for a large-size image on the same connection
>>>>
>>>> Definitions:
>>>> * Image load time = the number of seconds from when you click on a
>>>> thumbnail to when you see the full image
>>>> * Image size: large = over 2Mb, medium = 1 to 2Mb, small = under 1Mb
>>>> * Connection speed: fast = over 256 Kbps, medium = 64 to 256 Kbps, slow =
>>>> under 64 Kbps
>>>>
>>>> The above numbers are for discussion purposes, and can be adjusted
>>>> based on your feedback.
>>>>
>>>>
>>>> *II. Questions*
>>>> Here are the main research questions we propose to answer about image
>>>> load performance.
>>>>
>>>> *1. How long does it take for an image to load for the conditions
>>>> below?*
>>>> (image load = total time from thumbnail click to full image display)
>>>>
>>>> a. by image size:
>>>> load times for large images? medium images? small images?
>>>>
>>>> b. by web site:
>>>> load times for mediawiki.org? commons? enwiki? frwiki? huwiki?
>>>> other sites?
>>>>
>>>> c. by connection speed: (optional)
>>>> load times for fast connections? medium connections?
>>>> slow connections? (this may not be feasible in our time frame)
>>>>
>>>> d. by daypart: (optional)
>>>> load times for morning? afternoon? evening? night time? (to show if
>>>> performance slows during peak hours)
>>>>
>>>> This question could be answered by storing the timestamp for thumbnail
>>>> clicks, as well as the timestamp for the full image display, then log the
>>>> difference.
>>>>
>>>> We would then prepare different bar graphs for each condition set
>>>> above, with categories on the vertical axis, and number of seconds on the
>>>> horizontal axis. The graphs could be based on data from the last 7 days.
>>>>
>>>>
>>>> *2. How often does the image load time exceed our performance targets
>>>> above?*
>>>>
>>>> a. by load time in a day:
>>>> number of images that load in under 1 second? in 1-2 seconds? in
>>>> 2-3 seconds? ... and so on, up to 10 seconds or more
>>>>
>>>> b. by load time in a week:
>>>> number of images that load in under 1 second? in 1-2 seconds? in
>>>> 2-3 seconds? ... and so on, up to 10 seconds or more
>>>>
>>>> This question could be answered by preparing different histograms,
>>>> with number of images on the vertical axis, and number of seconds on the
>>>> horizontal axis (deciles).
>>>>
>>>>
>>>> *III. Outcomes*
>>>> To answer these questions, we plan to collect data during our upcoming
>>>> pilots on different sites in April.
>>>>
>>>> Based on these pilot results, we will need to make decisions about the
>>>> wider deployments planned for May.
>>>>
>>>> Here are possible outcomes from this study:
>>>>
>>>> Outcome 1: Favorable - e.g.: 80% of images load quickly
>>>> Action: Go ahead with current release plan to deploy Media Viewer
>>>> everywhere by default.
>>>>
>>>> Scenario 2: Neutral - e.g.: 50% of images load quickly
>>>> Action: Go ahead with current release plan, but deploy Media Viewer as
>>>> an opt-in feature on wikis that don't want it by default
>>>>
>>>> Scenario 3: Unfavorable - e.g.: 20% of images load quickly
>>>> Action: Revisit release plan: consider making this opt-in everywhere --
>>>> or work on faster image load solutions.
>>>>
>>>>
>>>> We would be grateful for your comments on this, so we can refine our
>>>> plans before we start this study next week. Please let us know which
>>>> metrics above seem most important, given that we only have a few developer
>>>> days to collect and analyze a few key metrics in coming weeks, to determine
>>>> if we are meeting our objectives. Some related links are included below,
>>>> for your convenience.
>>>>
>>>> To end on a positive note, we just deployed yesterday a new version of
>>>> Media Viewer that is much faster, thanks to all the fine work from our
>>>> development team. This morning, I looked at a variety on 'non-popular'
>>>> images on enwiki today, and the Media Viewer experience was quite good
>>>> overall. Most images load within the 2 second maximum which we
>>>> recommend for a 'fast' connection -- and this was a home wifi connection. I
>>>> realize this is completely anecdotal, and not supported by hard data, so we
>>>> can't make any decisions about it. But it makes me hopeful that we are
>>>> getting close to our objectives. Even compared to large commercial sites
>>>> like Flickr, we hold up pretty well on this computer. :)
>>>>
>>>> Thanks for your interest in this project.
>>>>
>>>> All the best,
>>>>
>>>>
>>>> Fabrice
>>>>
>>>>
>>>> _______________________________
>>>>
>>>>
>>>> *USEFUL LINKS*
>>>>
>>>> * Media Viewer Release Plan:
>>>> https://www.mediawiki.org/wiki/Multimedia/Media_Viewer/Release_Plan
>>>>
>>>> * First Media Viewer Metrics:
>>>> http://multimedia-metrics.wmflabs.org/dashboards/mmv Metrics
>>>>
>>>> * Media Viewer Test Page:
>>>> https://commons.wikimedia.org/wiki/Commons:Lightbox_demo
>>>>
>>>> * Metrics Tasks under consideration (Mingle):
>>>>
>>>> https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards?favorit…
>>>>
>>>> * Next Development Cycle (Mingle):
>>>> http://ur1.ca/gtvvr
>>>>
>>>> * About Media Viewer:
>>>> https://www.mediawiki.org/wiki/Multimedia/About_Media_Viewer
>>>>
>>>>
>>>> _______________________________
>>>>
>>>> Fabrice Florin
>>>> Product Manager, Multimedia
>>>> Wikimedia Foundation
>>>>
>>>> http://en.wikipedia.org/wiki/User:Fabrice_Florin_(WMF)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________
>>>>
>>>> Fabrice Florin
>>>> Product Manager
>>>> Wikimedia Foundation
>>>>
>>>> http://en.wikipedia.org/wiki/User:Fabrice_Florin_(WMF)
>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Pau Giner
>> Interaction Designer
>> Wikimedia Foundation
>>
>
>
> _______________________________
>
> Fabrice Florin
> Product Manager
> Wikimedia Foundation
>
> http://en.wikipedia.org/wiki/User:Fabrice_Florin_(WMF)
>
>
>
>
Greetings everyone!
As the Multimedia team finishes up this cycle of Media Viewer development,
we'd appreciate your feedback on some of its final features, such as the
proposed link to Media Viewer from Commons file description pages.
Currently, viewing media on Commons either shows a standardized image on the
file description page, or you can click on the image to view it in full
resolution. Offering a link to open the file using Media Viewer will allow
for a richer media experience, as the viewing size is increased, while
still providing useful information, as well as prominent tools for file
sharing and reuse.
We propose a "View expanded" link below the image on Commons pages (see
mockup thumbnail to the right). This will enable users to open the image in
Media Viewer, without making it the standard viewer for file pages.
Additionally, if a user opens a "share this file" link in Media Viewer and
then exits, they cannot return to Media Viewer without such a link.
You can find more information and comments from the designers on the Mingle
card #199[1] and you can view this mockup[2] of what the button will look
like below the file.
Thank you for your time and your feedback, please leave it on the "About
Media Viewer" talk page[3].
1. < https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/199>
2. <
https://commons.wikimedia.org/wiki/File:Media_viewer_access_from_Commons_de…>
3. <
https://www.mediawiki.org/wiki/Talk:Multimedia/About_Media_Viewer#Link_to_M…>
--
Keegan Peterzell
Community Liaison, Product
Wikimedia Foundation
Hi everyone,
We would appreciate your advice on our upcoming research study of image load times on Media Viewer.
Here are proposed goals, questions and outcomes for this study. They are presented for discussion purposes, not as a prescriptive requirement - and will be adjusted based on your feedback.
I. Goals
The goal of this study is to determine whether or not Media Viewer is loading images fast enough for the majority of our users in most common situations.
As a typical user of the Media Viewer, I want images to load quickly, in just a few seconds, so I don't have to wait a long time to see them.
Here are our recommended performance targets for image load times by connection speed, to match user expectations on the Web:
* 1-2 seconds for a medium-size image on a fast connection
* 2-3 seconds for the same image on a medium connection
* 5-8 seconds for the same image on a slow connection
If tracking connection speeds is too hard in our time-frame, we could base our performance targets on image size instead. For example:
* 1-2 seconds for a small-size image on a medium connection
* 2-3 seconds for medium-size image on the same connection
* 5-8 seconds for large-size image on the same connection
Definitions:
* Image load time = the number of seconds from when you click on a thumbnail to when you see the full image
* Image size: large = over 2 MB, medium = 1 to 2 MB, small = under 1 MB
* Connection speed: fast = over 256 kbps, medium = 64 to 256 kbps, slow = under 64 kbps
The above numbers are for discussion purposes, and can be adjusted based on your feedback.
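To make the discussion concrete, the size categories and targets above can be written down as data and checked programmatically. This is only an illustrative sketch using the fallback image-size targets; all names here are invented for the example, not actual Media Viewer code, and the thresholds simply copy the discussion numbers.

```javascript
// Proposed per-size maximum load times (seconds, on a medium connection).
// Thresholds are the discussion numbers above; names are illustrative.
var TARGET_MAX_SECONDS = {
  small: 2,
  medium: 3,
  large: 8
};

function sizeCategory(megabytes) {
  if (megabytes < 1) { return 'small'; }   // small = under 1 MB
  if (megabytes <= 2) { return 'medium'; } // medium = 1 to 2 MB
  return 'large';                          // large = over 2 MB
}

function meetsTarget(megabytes, loadSeconds) {
  return loadSeconds <= TARGET_MAX_SECONDS[sizeCategory(megabytes)];
}
```

For example, a 3 MB image loading in 6 seconds would meet the "large" target, while a 1.5 MB image taking 5 seconds would miss the "medium" one.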
II. Questions
Here are the main research questions we propose to answer about image load performance.
1. How long does it take for an image to load for the conditions below?
(image load = total time from thumbnail click to full image display)
a. by image size:
load times for large images? medium images? small images?
b. by web site:
load times for mediawiki.org? commons? enwiki? frwiki? huwiki? other sites?
c. by connection speed: (optional)
load times for fast connections? medium connections? slow connections? (this may not be feasible in our time frame)
d. by daypart: (optional)
load times for morning? afternoon? evening? night time? (to show if performance slows during peak hours)
This question could be answered by storing the timestamp for each thumbnail click, as well as the timestamp for the full image display, and logging the difference.
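The timestamp-difference idea can be sketched as a small timer object. Everything here is hypothetical (the method names, and the idea of an injectable clock for testability); the real instrumentation would hook into Media Viewer's own events and report the result through whatever logging pipeline is chosen.

```javascript
// Sketch: measure image load time as (display timestamp - click timestamp).
function createLoadTimer(now) {
  now = now || Date.now; // injectable clock, so the logic is testable
  var clickedAt = null;
  return {
    onThumbnailClick: function () {
      clickedAt = now(); // record when the thumbnail was clicked
    },
    onImageDisplayed: function () {
      if (clickedAt === null) { return null; } // no matching click recorded
      var loadMs = now() - clickedAt; // the difference is the load time
      clickedAt = null;
      return loadMs; // caller would log this value
    }
  };
}
```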
We would then prepare different bar graphs for each condition set above, with categories on the vertical axis, and number of seconds on the horizontal axis. The graphs could be based on data from the last 7 days.
2. How often does the image load time exceed our performance targets above?
a. by load time in a day:
number of images that load in under 1 second? in 1-2 seconds? in 2-3 seconds? … and so on, up to 10 seconds or more
b. by load time in a week:
number of images that load in under 1 second? in 1-2 seconds? in 2-3 seconds? … and so on, up to 10 seconds or more
This question could be answered by preparing different histograms, with number of images on the vertical axis, and number of seconds on the horizontal axis (deciles).
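The bucketing behind such a histogram is straightforward: 1-second bins from "under 1 second" up to a final "10 seconds or more" bucket. A minimal sketch, with illustrative names only:

```javascript
// Bucket observed load times (in seconds) into 1-second bins.
// bins[0] = under 1s, bins[1] = 1-2s, ..., bins[10] = 10s or more.
function loadTimeHistogram(loadTimesSeconds) {
  var bins = [];
  for (var i = 0; i <= 10; i++) { bins.push(0); }
  loadTimesSeconds.forEach(function (s) {
    bins[Math.min(Math.floor(s), 10)] += 1;
  });
  return bins;
}
```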
III. Outcomes
To answer these questions, we plan to collect data during our upcoming pilots on different sites in April.
Based on these pilot results, we will need to make decisions about the wider deployments planned for May.
Here are possible outcomes from this study:
Outcome 1: Favorable - e.g.: 80% of images load quickly
Action: Go ahead with current release plan to deploy Media Viewer everywhere by default.
Outcome 2: Neutral - e.g.: 50% of images load quickly
Action: Go ahead with current release plan, but deploy Media Viewer as an opt-in feature on wikis that don't want it by default.
Outcome 3: Unfavorable - e.g.: 20% of images load quickly
Action: Revisit release plan: consider making this opt-in everywhere, or work on faster image load solutions.
We would be grateful for your comments on this, so we can refine our plans before we start this study next week. Please let us know which metrics above seem most important, given that we only have a few developer days to collect and analyze a few key metrics in coming weeks, to determine if we are meeting our objectives. Some related links are included below, for your convenience.
To end on a positive note, we just deployed yesterday a new version of Media Viewer that is much faster, thanks to all the fine work from our development team. This morning, I looked at a variety of 'non-popular' images on enwiki, and the Media Viewer experience was quite good overall. Most images load within the 2-second maximum which we recommend for a 'fast' connection (and this was a home wifi connection). I realize this is completely anecdotal, and not supported by hard data, so we can't make any decisions based on it. But it makes me hopeful that we are getting close to our objectives. Even compared to large commercial sites like Flickr, we hold up pretty well on this computer. :)
Thanks for your interest in this project.
All the best,
Fabrice
_______________________________
USEFUL LINKS
* Media Viewer Release Plan:
https://www.mediawiki.org/wiki/Multimedia/Media_Viewer/Release_Plan
* First Media Viewer Metrics:
http://multimedia-metrics.wmflabs.org/dashboards/mmv
* Media Viewer Test Page:
https://commons.wikimedia.org/wiki/Commons:Lightbox_demo
* Metrics Tasks under consideration (Mingle):
https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards?favorit…
* Next Development Cycle (Mingle):
http://ur1.ca/gtvvr
* About Media Viewer:
https://www.mediawiki.org/wiki/Multimedia/About_Media_Viewer
_______________________________
Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation
http://en.wikipedia.org/wiki/User:Fabrice_Florin_(WMF)
Hi folks,
Here are our notes from yesterday’s multimedia team meeting, when we planned our next development cycles.
We discussed our goals for the next 6-week cycle, as well as for subsequent cycles. We agreed to focus on wrapping up feature development and gradually releasing Media Viewer v0.2 in April/May, so we can move on to other important tasks on our roadmap.
To stay focused on this goal, we decided to devote the next 6-week cycle to completing final features for the first pilot release of Media Viewer v0.2 in the next cycle.
We will then switch our focus to Upload Wizard and Structured Data in the subsequent cycle, only fixing Media Viewer bugs as needed. We expect this new work to continue to be our main focus through the summer.
To make rapid progress on these important multimedia priorities, we may need to push back Media Viewer v0.3 to later in the year, or only do incremental work on it in the next quarters.
Our current goals for the next few cycles are outlined below — and you can read more on our meeting notepad:
http://etherpad.wikimedia.org/p/multimedia-planning-meeting-2014-03-12
We welcome your comments, questions and suggestions about these proposed goals. We expect to adjust them periodically, based on this feedback and other factors.
Please respond by email for now. You can also post comments about Media Viewer on this discussion page:
https://www.mediawiki.org/wiki/Talk:Multimedia/About_Media_Viewer
Thanks as always for your constructive feedback, which is invaluable to us!
Best regards,
Fabrice
on behalf of the Multimedia team
____________________________________________
MULTIMEDIA TEAM GOALS - NEXT CYCLES
Here are our proposed goals for the multimedia team’s next development cycles. (We define a cycle as a 6-week iteration, which includes 6 weekly sprints; it can also include one or more product releases).
We will be discussing these goals with community and team members in coming days. We expect to adjust them periodically, based on this feedback and other factors.
1. Goals for Current Cycle
* Current Cycle started February 6, ends March 19 (second half of fiscal Q3).
* Media Viewer v0.2 - Develop key features for pilot release in April
* Analyze image load metrics, define acceptance criteria
* Bugs & Technical Debt - Ongoing
* See Mingle wall of tasks completed this cycle: http://ur1.ca/gtyvh
2. Goals for Next Cycle
* Starts March 20, ends April 30 (first half of fiscal Q4).
* Media Viewer v0.2 - Complete key features and launch tasks for April release
* Release Media Viewer v0.2 in April/May (MW.org > small wikis > large wikis > all wikis)
* Fix Media Viewer v0.2 Bugs (address most urgent community concerns)
* Upload Wizard - Fix large bugs, collect metrics, unit tests, plan next steps
* Bugs & Technical Debt - Ongoing
* See Mingle wall: http://ur1.ca/gtvvr
3. Goals for Subsequent Cycle
* Starts May 1, ends June 11 (second half of fiscal Q4)
* Upload Wizard - improve UI design, incremental code refactoring, new features
* Fix Media Viewer v0.2 Bugs (focus on most important community concerns)
* Structured Data - start planning and first prototypes, with Wikidata and community
* File Notifications - Your file was used
* See Mingle wall: http://ur1.ca/gtyt3
4. Goals for 'Next-Subsequent' Cycle
* Starts June 12, ends July 23 (first half of fiscal Q1)
* Upload Wizard continued improvements and bug fixes, as needed
* Structured Data - start implementation on Commons, with Wikidata and community
* Fix Media Viewer v0.2 Bugs (focus on most important issues)
As time allows, we will consider incremental work on these goals throughout next year:
* Media Viewer v0.3 - Develop new features (zoom, A/V support, plugins)
* File Feedback - Develop positive feedback tool (Thanks, Watch or Favorite)
Useful links
* Multimedia Project Hub
https://www.mediawiki.org/wiki/Multimedia
* About Media Viewer
https://www.mediawiki.org/wiki/Multimedia/About_Media_Viewer
* Media Viewer Release Plan
https://www.mediawiki.org/wiki/Multimedia/Media_Viewer/Release_Plan
_______________________________
Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation
https://www.mediawiki.org/wiki/User:Fabrice_Florin_(WMF)
Hello everyone,
Thanks for your good comments over the weekend about the Opt-out feature for Media Viewer!
Based on your feedback, we now plan to provide an 'enabled' user preference (option A), as described on our discussion page. (1)
Today, we would like your guidance on another Media Viewer feature: the link to Commons (or other file repository). (2)
Many people have raised concerns on our discussion pages that the current link is not prominent enough to help power users go to Commons, or to make new users aware of what Commons is. Right now, that link to the Commons file info page is located below the fold, at the top of the right column in the meta-data panel (3); the current label is "Learn more on Wikimedia Commons".
As recommended by many in our onwiki, email and IRC discussions, we have been exploring different ways to make this link to Commons more prominent, as outlined in this Mingle card #270 (4).
This link is trying to solve the needs of two very different user groups:
• Advanced users need a quick link to edit the Commons description page and perform other related editorial tasks for that image.
• New users want to know more about the image, and also need information on what Commons is and why they should go there.
To address these issues for each user group, we are considering different design solutions, prepared by our designer Pau Giner:
A. Simple 'Edit’ button: (5)
Provide an ‘Edit’ tool above the fold, so that advanced users can quickly go to the Commons description page to edit it. Restrict this to logged-in users only?
• Pros: gives editors a much-needed edit tool, in a compact format that is easy to understand (pencil icon), making it easier for them to do their work
• Cons: readers could get confused by this tool, which takes them to a completely different site (so we may want to not show it to them at all).
B. 'Edit’ button with tooltip: (6)
Provide the same ‘Edit’ tool above the fold, but show a tooltip on hover, to explain to new users what it does. Show the edit tool to everybody.
• Pros: gives editors the same useful, compact tool, to help them do their editing work quickly
• Cons: readers should like the tooltip, but it may annoy some editors (don’t show the tooltip to advanced users?)
C. 'More details on Commons’: (7)
Provide a call to action inviting new users to check more details on Commons, explaining what it is and how to get there. Shown below the fold, after key details.
• Pros: Clarifies what Commons is and why users might want to go there: to get more details and share free media. Larger panel makes it easier to find.
• Cons: Below the fold position means many users will not see it. Consider using it in combination with Options A or B above?
We would appreciate your advice on which of the options above would be most helpful for both new users and power users. Note that we may want to use some of these design ideas in combination (e.g.: Option B + C), to offer different solutions to meet the specific needs of each user group.
Please respond via email on this list — or add your comments on this discussion page:
https://www.mediawiki.org/wiki/Talk:Multimedia/About_Media_Viewer#Feedback_…
Later this week, we will ask your advice about adding a button on Commons to open an image in Media Viewer.
Thanks as always for your constructive advice — and speak with you soon!
Fabrice
(1) Discussion of Opt-out Feature for Media Viewer:
https://www.mediawiki.org/wiki/Talk:Multimedia/About_Media_Viewer#Feedback_…
(2) Discussion about Links to Commons:
https://www.mediawiki.org/wiki/Talk:Multimedia/About_Media_Viewer#Feedback_…
(3) Media Viewer Meta-data panel:
https://upload.wikimedia.org/wikipedia/commons/4/4b/Media_Viewer_Screenshot…
(4) Prominent Links to Commons File Pages in Media Viewer - Mingle card #270:
https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/270
(5) Mockup A: Edit Button - Helps power users quickly edit Commons page
https://commons.wikimedia.org/wiki/File:Media_viewer_access_to_Commons_thro…
(6) Mockup B: Edit Button w/ tooltip - Same tool, but gives new users a tooltip to explain what it does
https://commons.wikimedia.org/wiki/File:Media_viewer_access_to_Commons_thro…
(7) Mockup C: ‘Details on Commons’ - Call to action helps new users get more details on Commons, and explains what it is
https://commons.wikimedia.org/wiki/File:Design_for_more_details_access_to_C…
P.S.: Let's also look for ways to remind power users that they can use keyboard shortcuts to bypass Media Viewer and access images files directly on Commons (e.g.: Ctrl-click or Shift-click).
_______________________________
Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation
http://en.wikipedia.org/wiki/User:Fabrice_Florin_(WMF)
Hi guys,
We would appreciate your advice on the Opt-out Feature for Media Viewer.
Our goal is to enable Media Viewer by default on all wikis when the tool is ready.
But as recommended by many in our onwiki and IRC discussions, we would also like to provide a way for users to disable Media Viewer on a given site, so they can opt out of this feature if they don't want it.
To that end, we are considering these different options:
A. 'Enabled' user preference
Provide a preference checkbox with Media Viewer enabled by default (e.g.: 'Show images in Media Viewer'). To disable MV, users can uncheck this preference.
• Pros: preferable from a UX point of view, indicates this is our recommended option, more user-friendly than the JS gadget option below
• Cons: this approach has caused problems before, users may not want this option to be selected for them, adds to preference bloat issue
B. 'Disable' user preference
Provide a preference checkbox where Media Viewer can be disabled (e.g.: 'Disable Media Viewer'). To re-enable MV, users can uncheck that preference.
• Pros: addresses user concerns about pre-selection, more user-friendly than the JS gadget option below
• Cons: doesn't explain what Media Viewer is, confusing because you have to uncheck the preference to re-enable Media Viewer, adds to preference bloat issue
C. Javascript gadget or script
Provide a site-wide gadget (or personal JS script) that would let users disable Media Viewer.
• Pros: no preference bloat, no cache fragmentation, can simply ride on #263 and provide example JS code.
• Cons: not user-friendly (the gadget has to be installed manually), the bootstrap script would still get loaded.
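The logic such a gadget (or personal script) would toggle can be sketched as a single flag that decides whether a thumbnail click opens Media Viewer or falls through to the plain file page. All names below are hypothetical, chosen for the example; the real gadget would use the hook provided by card #263 rather than this standalone code.

```javascript
// Hypothetical sketch of the gadget's effect: one flag gating Media Viewer.
var settings = { mediaViewerEnabled: true };

function handleThumbnailClick(href, openInMediaViewer, openFilePage) {
  if (settings.mediaViewerEnabled) {
    return openInMediaViewer(href);
  }
  // With the gadget installed, the flag is cleared and clicks behave
  // exactly as they did before Media Viewer was introduced.
  return openFilePage(href);
}
```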
Notes
• If we implement a user preference, the recommended location would be in the 'Appearance' section, under 'Files'.
• We should also let power users know that they can easily use Ctrl-click or Shift-click to bypass Media Viewer and access images the same way they used to before this feature was introduced. So even with Media Viewer enabled, there are shortcuts they can use to go directly to Commons if they like.
We would appreciate your advice on which of the options above would be most helpful for the majority of our users (not just power users).
Please respond via email on this list — or add your comments on this discussion page:
https://www.mediawiki.org/wiki/Talk:Multimedia/About_Media_Viewer#Feedback_…
Next week, we will ask your advice about other high-priority features for Media Viewer.
Thank you — and have a wonderful weekend!
Fabrice
_______________________________
Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation
http://en.wikipedia.org/wiki/User:Fabrice_Florin_(WMF)