On Fri, Aug 8, 2014 at 2:47 AM, MZMcBride z@mzmcbride.com wrote:
> Yep. At least one of the on-wiki comments by a liaison made me do a double-take as it had the tone of "your call is very important to us, please stay on the line and a representative will be with you shortly."
Sure, sometimes in a liaison role all you're able to say is "we'll get back to you". But that doesn't really characterize the day-to-day work that Rachel's team is doing in supporting the development of products. Part of the reason we brought this group closer together with development teams is precisely so that there can be meaningful interactions. I see this kind of thread all the time:
https://en.wikipedia.org/w/index.php?title=Wikipedia:VisualEditor/Feedback&a...
Some part of the software changes, a user points out an inconsistency or issue that results, and a CL responds, tracks the issue, and talks with the PM about it as appropriate. This applies just as much to early-stage features like Flow. There are also regular "What would you like to see" threads, which are used for idea generation and prioritization.
> But the Wikimedia Foundation took the lesson to be that it simply needed to move a bit more slowly, not more smartly.
Not really. If you take MediaViewer, here are some of the things that changed in addition to the pace of deployment:
- clearly established opt-in phase via Beta Features (which was created post-VE as a way to advertise new features)
- built-in clicktracking and performance metrics from fairly early on
- built-in user surveys
- iterative, frequent testing of design prototypes
- community liaison support from the beginning of development, working in partnership with the PM
That's not to say that there isn't plenty of room for improvement, including but not limited to:
- stronger success/failure metrics for any new feature
- truly continuous qualitative and quantitative validation throughout the development lifecycle
- staged rollouts for large wikis (%-of-audience based or otherwise; see the sketch below)
- an improved microsurvey system consistently used for measuring user satisfaction
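To make the %-of-audience idea concrete, here is a minimal sketch of deterministic bucketing keyed on a stable user identifier. The function names, hashing scheme, and rollout fractions are illustrative assumptions on my part, not an existing MediaWiki or WMF API; a real staged rollout would also want per-wiki configuration, logging, and an opt-out path.

```typescript
// Sketch only: hypothetical helpers, not an existing MediaWiki API.
// Assumes a stable per-user identifier so the same user always lands
// in the same bucket across visits.

function hashToUnitInterval(id: string): number {
  // Simple deterministic string hash mapped to [0, 1).
  let h = 0;
  for (let i = 0; i < id.length; i++) {
    h = (h * 31 + id.charCodeAt(i)) >>> 0;
  }
  return h / 0x100000000;
}

function isInRollout(userId: string, rolloutFraction: number): boolean {
  // rolloutFraction: e.g. 0.05 exposes the feature to ~5% of users.
  return hashToUnitInterval(userId) < rolloutFraction;
}

// Example: ramp from 5% to 25% to 100% over successive weeks by
// changing only the fraction.
if (isInRollout('ExampleUser', 0.05)) {
  // enable the new feature for this user
}
```

Thresholding a deterministic hash has the nice property that raising the fraction only adds users; nobody who already has the feature loses it partway through the rollout.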
I don't think we'll ever get to a place where we'll always have consensus about whether a feature should exist, but it should be possible to get closer to consensus about how to measure whether it does what it was built to do.
> They say MediaViewer may one day be as feature-ful as the file description pages we've had for a long time (editing capability, oh my!). It makes little sense to create hobbled file description pages in JavaScript rather than addressing the actual issues that file description pages have, but this point seems to have gotten completely lost somewhere.
Not at all. The summary view you get in MV is just that: a summary. As you know, robust metadata support for Wikimedia Commons and locally hosted files is on the joint roadmap between the multimedia and Wikidata teams.
MV is first and foremost what it says on the tin: a media viewer. It gives you access to the image in a form nicely sized for your browser window, without leaving the page you're on, and pre-loads next/previous images for quicker access. Whether it should show advanced metadata at all or just refer back to the File: page is debatable.
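For illustration, the pre-loading behavior amounts to something like the sketch below; the interface and function names are hypothetical stand-ins, not MediaViewer's actual code, and assume the gallery order is already known client-side.

```typescript
// Illustrative sketch of pre-loading the next/previous images in a gallery.
// Names and structure are assumptions, not MediaViewer's implementation.

interface GalleryImage {
  thumbUrl: string; // URL for a rendition sized to the current viewport
}

function preloadAdjacent(images: GalleryImage[], currentIndex: number): void {
  // Start background fetches for the neighbours so stepping through the
  // lightbox feels instant; the browser cache holds the results.
  for (const offset of [-1, 1]) {
    const neighbour = images[currentIndex + offset];
    if (neighbour) {
      const img = new Image();
      img.src = neighbour.thumbUrl;
    }
  }
}
```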
Indeed, we're currently planning to user-test prototypes with readers that eliminate the metadata panel (except for extended captions) and have a much more prominent "Details" link to the File: page. Early responses from community members we've spoken to about this have also been positive. This would alleviate issues with perceived munging of important templates/bits of data by more clearly giving each feature (File: page, lightbox) its own purpose; we can then revisit advanced metadata integration later. We're not committing to that path yet, but we're exploring it as part of normal iterative improvement of the feature.
Erik