Hi all,
We have recently added some funnel [1] logging to UploadWizard. A nice
dashboard is in the works, but here are some preliminary results, showing
the number of virtual pageviews for each step of UploadWizard.
mysql:research@s1-analytics-slave.eqiad.wmnet [log]> select event_step,
count(*), count(*)/3623 as survival_rate from UploadWizardStep_8612364
group by event_step order by survival_rate desc;
+------------+----------+---------------+
| event_step | count(*) | survival_rate |
+------------+----------+---------------+
| tutorial   |     3623 |        1.0000 |
| file       |     3496 |        0.9649 |
| deeds      |     2433 |        0.6715 |
| details    |     2373 |        0.6550 |
| thanks     |     2109 |        0.5821 |
+------------+----------+---------------+
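The survival-rate math in the query above can be sketched in Python without hard-coding the 3623 denominator (the step names and counts are copied from the table; the helper name is mine):

```python
# Mirror of the SQL above: divide each step's count by the count of the
# first step ("tutorial") instead of hard-coding 3623, so the same logic
# can be re-run on fresh data.
step_counts = {
    "tutorial": 3623,
    "file": 3496,
    "deeds": 2433,
    "details": 2373,
    "thanks": 2109,
}

def survival_rates(counts, first_step="tutorial"):
    total = counts[first_step]
    return {step: round(n / total, 4) for step, n in counts.items()}

rates = survival_rates(step_counts)
```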
This is based on about a day's worth of logs (25.5 hours) - the logging
code was deployed to Commons yesterday.
The big drop apparently happens in the file upload step (almost 30% - well
over 1000 uploads a day). Some of that might be intentional (uploads caught
by the badtitle filter, etc.), but even so the drop is huge. Given that this
step is rather simple from a UX point of view, it seems that upload bugs are
a bigger problem right now than design issues.
(The license selection - deeds -> details - on the other hand is
unexpectedly unproblematic; I would have expected it to be the main source
of confusion, but actually adding description etc. seems worse.)
The next step would be to log JS/upload errors, I suppose.
Also, it would be nice to know which dropoffs are final and which are
reloads/restarts. The Navigation Timing API can distinguish reloads from
normal navigation; alternatively, we could group by IP + useragent +
time bucket to find retries.
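The IP + useragent + time-bucket idea could look roughly like this (the field names and bucket size are assumptions for illustration, not the actual EventLogging schema):

```python
# Heuristic sketch: bucket "tutorial" events by (IP, user agent, coarse
# time window) and flag buckets with more than one hit as likely
# restarts rather than distinct upload attempts.
from collections import defaultdict

BUCKET_SECONDS = 3600  # one-hour windows; tune as needed

def find_retries(events):
    groups = defaultdict(int)
    for e in events:
        if e["step"] == "tutorial":
            key = (e["ip"], e["ua"], e["ts"] // BUCKET_SECONDS)
            groups[key] += 1
    return {key for key, n in groups.items() if n > 1}
```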
Thanks Yuri,
CC'ing Multimedia team
Maryana, this could be something interesting for the Mobile Web team
to look at to optimize image delivery.
Have you guys done any perf work around images?
--tomasz
On Thu, Jun 5, 2014 at 4:10 PM, Yuri Astrakhan <yastrakhan(a)wikimedia.org> wrote:
> The reduced-quality images are now live in production. To see it for
> yourself, compare the original with the low-quality version (253KB =>
> 99.9KB, a 60% reduction).
>
> The quality reduction is triggered by adding "qlow-" in front of the file
> name's pixel size.
>
> Continuing our previous discussion, now we need to figure out how to best
> use this feature. As covered before, there are two main approaches:
> * JavaScript rewrite - dynamically change <img> tag based on
> network/device/user preference conditions. Issues may include multiple
> downloads of the same image (if the browser starts the download before JS
> runs), parser cache fragmentation.
>
> * Varnish-based rewrite - Varnish decides which image to serve under the
> same URL. This approach requires Varnish to know everything needed to make
> a decision.
>
> Zero plans to go the first route, but if we extend it to mobile, or even
> site-wide, all the better.
>
> _______________________________________________
> Mobile-l mailing list
> Mobile-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>
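For illustration, the "qlow-" rewrite described in the quoted message could be sketched as a URL transform; the thumbnail path layout below is an assumption modeled on typical Commons thumb URLs, not a confirmed spec:

```python
import re

# Hypothetical sketch: prepend "qlow-" to the "<width>px-" size prefix of
# a thumbnail file name, e.g.
#   .../Foo.jpg/220px-Foo.jpg -> .../Foo.jpg/qlow-220px-Foo.jpg
def to_low_quality(thumb_url):
    return re.sub(r"/(\d+px-)([^/]+)$", r"/qlow-\1\2", thumb_url)
```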
Currently the file page provides a set of different image sizes for the
user to access directly. These sizes are usually width-based; however, for
tall images they are height-based. The thumbnail URLs, which are used to
generate them, pass only a width.
What this means is that tall images end up with arbitrary thumbnail widths
that don't follow the set of sizes meant for the file page. The end result
from an ops perspective is that we end up with very diverse widths for
thumbnails. Not a problem in itself, but the exposure of these random-ish
widths on the file page means that we can't set a different caching policy
for non-standard widths without affecting the images linked from the file
page.
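To illustrate why height-based sizes produce arbitrary widths: each target height has to be converted back into a width before it can go into the thumbnail URL. A rough sketch (the rounding here is an assumption; MediaWiki's exact math may differ):

```python
# A height-constrained thumbnail of a tall image still has to be
# requested by width, so the target height is converted into whatever
# width preserves the aspect ratio.
def width_for_height(orig_width, orig_height, target_height):
    return max(1, round(orig_width * target_height / orig_height))

# A 2000x6000 (portrait) original, asked for a few standard heights:
widths = [width_for_height(2000, 6000, h) for h in (300, 600, 1024)]
# none of the resulting widths follow the standard file-page width set
```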
I see two solutions to this problem, if we want to introduce different
caching tiers for thumbnail sizes that come from mediawiki and thumbnail
sizes that were requested by other things.
The first one would be to always keep the size progression on the file page
width-bound, even for soft-rotated images. The first drawback is that for
very skinny or very wide images the file-size progression between the sizes
could become steep. The second is that we'd often offer fewer size options,
because they'd be based on the smallest dimension.
The second option would be to change the syntax of the thumbnail urls in
order to allow height constraint. This is a pretty scary change.
If we don't do anything, it simply means that we'll have to apply the same
caching policy to every size smaller than 1280. We could already save quite
a bit of storage space by evicting non-standard sizes larger than that, but
sizes lower than 1280 would have to stay the way they are now.
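The "do nothing" policy described above could be expressed as a simple tiering rule; the standard-width set below is illustrative only, not the actual file-page list:

```python
# Every width below 1280 keeps the current long-lived caching, while
# non-standard widths of 1280 and above become candidates for eviction.
STANDARD_WIDTHS = {320, 640, 800, 1024, 1280, 2560}

def cache_tier(width):
    if width < 1280 or width in STANDARD_WIDTHS:
        return "long-lived"
    return "evictable"
```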
Thoughts?
Greetings!
We invite you to join a discussion about Structured Data on Commons, to help us plan our next steps for this project.
The Structured Data initiative proposes to store and retrieve information for media files in machine-readable data on Wikimedia Commons, using Wikidata tools and practices, as described on our new project page (1).
The purpose of this project is to make it easier for users to read and write file information, and to enable developers to build better tools to view, search, edit, curate and use media files. To that end, we propose to investigate this opportunity together through community discussions and small experiments. If these initial tests are successful, we would develop new tools and practices for structured data, then work with our communities to gradually migrate unstructured data into a machine-readable format over time.
The Multimedia team and the Wikidata team are starting to plan this project together, in collaboration with many community volunteers active on Wikimedia Commons and other wikis. We had a truly inspiring roundtable discussion about Structured Data at Wikimania a few weeks ago, to define a first proposal together (2).
We would now like to extend this discussion to include more community members that might benefit from this initiative. Please take a moment to read the project overview on Commons, then let us know what you think, by answering some of the questions on its talk page (3).
We also invite you to join a Structured Data Q&A on Wednesday September 3 at 19:00 UTC, so we can discuss some of the details live in this IRC office hours chat. Please RSVP if you plan to attend (4).
Lastly, we propose to form small workgroups to investigate workflows, data structure, research, platform, features, migration and other open issues. If you are interested in contributing to one of these workgroups, we invite you to sign up directly on our hub page (5) -- and help start a sub-page for your workgroup.
We look forward to some productive discussions with you in coming weeks. In previous roundtables, many of you told us this is the most important contribution that our team can make to support multimedia in coming years. We heard you loud and clear and are happy to devote more resources to bring it to life, with your help.
We are honored to be working with the Wikidata team and talented community members like you to take on this challenge, improve our infrastructure and provide a better experience for all our users.
Onward!
Fabrice — for the Structured Data team
(1) Structured Data Hub on Commons:
https://commons.wikimedia.org/wiki/Commons:Structured_data
(2) Structured Data Slides:
https://commons.wikimedia.org/wiki/File:Structured_Data_-_Slides.pdf
(3) Structured Data Talk Page:
https://commons.wikimedia.org/wiki/Commons_talk:Structured_data
(4) Structured Data Q&A (IRC chat on Sep. 3):
https://commons.wikimedia.org/wiki/Commons:Structured_data#Discussions
(5) Structured Data Workgroups:
https://commons.wikimedia.org/wiki/Commons:Structured_data#Workgroups
_______________________________
Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation
https://www.mediawiki.org/wiki/User:Fabrice_Florin_(WMF)
Mixins in OOjs UI have always had, shall we say, "strange" names.
Popuppable is my personal favorite, but the strangest thing about them
has always been the lack of correlation between their names and what they
actually do. Furthermore, Alex came across a situation where the
convention of providing an element to a mixin at construction time cannot
always be followed.
I've written a set of patches[1][2][3] which do the following:
   - Mixins are now named according to what they do[2], using the "ed"
   suffix if the mixin adds/manages attributes, "able" if it adds/manages
   behavior, and no suffix if it adds content.
- Mixins no longer take a required element argument, but do still allow
the element to be passed through the config options
   - Mixins use a set{Type}Element method to set, and even change, the
   element being targeted by the mixin - this is called in the constructor
   with an overridable default, but can also be called again at any time.
Attribute and behavior mixins always operate on this.$element by default.
Content mixins always generate an element to operate on by default. Again,
in both cases the element being initially targeted can be configured using
the config object.
This division was made specifically to reduce or eliminate the need for
using this.$( '<{tagName}>' ) when invoking the mixin constructor, instead
doing automatically what was being done manually most of the time.
The rename will hopefully not cause too much confusion. It's important to
note that both the JavaScript and CSS classes have been updated.
Roan is reviewing the patches and they will probably be merged shortly. If
you know of any code that may be affected by this change but has not been
considered in the patches mentioned, please let me know.
- Trevor
[1] https://gerrit.wikimedia.org/r/#/c/157274
[2] https://gerrit.wikimedia.org/r/#/c/157286
[3] https://gerrit.wikimedia.org/r/#/c/157285
[4] Table of classes that have been renamed:

    ButtonedElement   -> ButtonElement
    IconedElement     -> IconElement
    IndicatedElement  -> IndicatorElement
    LabeledElement    -> LabelElement
    PopuppableElement -> PopupElement
    FlaggableElement  -> FlaggedElement
Heja,
I think mtraceur pinged me on IRC a while ago, proposing an UploadWizard
bug triage. If I remember correctly:
Any specific aspects or topics in mind? Some general "let's retest random
reports to see if they are still valid"? Or more looking at the priorities
of tickets?
If retesting reports: Would retesting take place on
http://commons.wikimedia.beta.wmflabs.org/wiki/Special:UploadWizard , I
assume that the broken first image and the missing templates exposed in
the summary are "alright"? (Asking because this might confuse potential
volunteers trying to reproduce issues there.)
Severity x priority table of open tickets (might come in handy):
https://bugzilla.wikimedia.org/report.cgi?x_axis_field=priority&y_axis_fiel…
Cheers,
andre
[Please CC me on replies; I'm not subscribed to multimedia@]
--
Andre Klapper | Wikimedia Bugwrangler
http://blogs.gnome.org/aklapper/
Thanks, Gerard!
This seems like a great idea.
I believe that Liam Wyatt and Andrew Lih are reaching out to the project leader, to see if he needs help uploading some of that content to Commons.
Music to my ears :)
Fabrice
On Aug 29, 2014, at 2:34 AM, Gerard Meijssen <gerard.meijssen(a)gmail.com> wrote:
> Hoi,
> This article is of both interest to Commons and Wikipedia.. It is awesome.
> Thanks,
> GerardM
>
> http://www.bbc.com/news/technology-28976849
> _______________________________________________
> Commons-l mailing list
> Commons-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/commons-l
_______________________________
Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation
https://www.mediawiki.org/wiki/User:Fabrice_Florin_(WMF)
Hello friends of multimedia,
We would like to invite you to join a global community consultation about Media Viewer.
Please take a moment to join the discussion and add your suggestions for improvement here:
https://meta.wikimedia.org/wiki/Community_Engagement_(Product)/Media_Viewer…
The goal of this consultation is to review the status of this project and identify any critical issues that still need to be addressed -- so we can plan our next steps based on your feedback.
The consultation will be open until September 7th. If agreed-upon critical issues cannot be resolved in the near term, the Wikimedia Foundation will temporarily move the feature back into opt-in beta globally.
Here’s our latest Media Viewer Improvements plan, where our team will post regular updates on our planned development tasks:
https://www.mediawiki.org/wiki/Multimedia/Media_Viewer/Improvements
Note that the proposed tasks on that page are still preliminary, and may be adjusted based on community feedback and ongoing user studies.
We look forward to hearing from you on the consultation page.
Regards as ever,
Fabrice
_______________________________
Fabrice Florin
Product Manager, Multimedia
Wikimedia Foundation
https://www.mediawiki.org/wiki/User:Fabrice_Florin_(WMF)
Forwarding to a relevant list -- I hope this is useful feedback to the Multimedia team itself
svetlana
----- Original message -----
From: John Mark Vandenberg <jayvdb(a)gmail.com>
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Subject: Re: [Wikimedia-l] Next steps regarding WMF<->community disputes about deployments
Date: Mon, 25 Aug 2014 14:37:19 +1000
On Mon, Aug 25, 2014 at 2:07 PM, Marc A. Pelletier <marc(a)uberbox.org> wrote:
> On 08/24/2014 11:19 PM, Pine W wrote:
>> I have
>> heard people say "don't force an interface change on me that I don't think
>> is an improvement."
>
> I do not recall a recent interface change deployment that wasn't
> accompanied with, at the very least, some method of opting out. Did I
> miss one?
Did you try opting out of MediaViewer on the mobile version?
I think the response that I received confirmed it wasn't possible.
Per-user opt-out aside, the WMF was still forcing an interface change
onto the community at large. With VE, the WMF needed the community to
add TemplateData to all templates to help the newbies who were using
VE; with MV, the WMF needed the community to 'tag' images which
shouldn't be shown in the MV, and there is an ongoing need for the
community to 'fix' the image page syntax in order for the information
to display correctly to the end users in MV.
In both cases, a significant amount of volunteer time is required to
avoid a bad user experience.
WMF needs 'buy-in' for that, if it wants volunteers to be happy
volunteers while doing mundane work to make the new software suck
less.
--
John Vandenberg
_______________________________________________
Wikimedia-l mailing list, guidelines at: https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l(a)lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, <mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
FYI - User:TheDJ has made a very handy tool that will let you see the
machine-readable data (per COM:MRD) for a given file on the File: page.
This helps ensure that any automated re-use can be done in a manner that's
license compliant and consistent with authors' wishes. To give it a spin,
import this guy into your common.js:
https://commons.wikimedia.org/wiki/User:TheDJ/datacheck.js
This should definitely help lead into the systematic structured data
efforts. Still early days, but it would make a nice gadget down the line :)
Erik