[Foundation-l] Vector, a year after

Erik Moeller erik at wikimedia.org
Mon Apr 4 20:14:04 UTC 2011


2011/4/4 Rodan Bury <bury.rodan at gmail.com>:
> As Erik Möller said, the qualitative analysis is the user testing with a
> few dozen users. This user testing was conducted several times during the
> development cycle, and it was thorough. The best user testing involves no
> more than 30 users, and I can tell that the user testing conducted by the
> Usability Team was of a high standard.

See also:
http://en.wikipedia.org/wiki/Usability_testing#How_many_users_to_test.3F
which has links to relevant research.
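
For context, the oft-cited Nielsen/Landauer model behind that rule of
thumb estimates the share of usability problems found by n testers as

  found(n) = N * (1 - (1 - L)^n)

where N is the total number of problems and L is the proportion a
single tester uncovers (roughly 31% in their studies). With L = 0.31,
five testers already surface about 85% of the problems, which is why
several small test rounds tend to beat one large one.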

Note that we did both in-person and remote testing. Remote tests were
still focused on US subjects for a variety of reasons (the need for
reliable connectivity, the added complexity of recruiting and
scheduling internationally, etc.). Ultimately I hope chapters can get
more involved in on-the-ground user testing in additional locations,
to surface more culture- and language-specific issues.

> As for the quantitative analysis, the one conducted during the beta testing
> of Vector was detailed. It clearly showed that most users - and especially
> newbies - preferred Vector over Monobook (retention rates of 70-80% and
> higher).

That's correct. See
http://usability.wikimedia.org/wiki/Beta_Feedback_Survey for details;
the survey included quite a bit of language-specific analysis and led
to follow-up bugfixes. It was the largest feedback collection we've
ever done for a software feature, and it surfaced key issues with
specific languages, many of which were resolved.

> Now, the Usability Initiative ended in April 2010, soon after the
> deployment of Vector to all Wikimedia wikis. The Wikimedia Foundation did
> not place usability as one of their main priorities

That's not correct. Firstly, we continued deployments and bug fixes
after the grant period. As a reminder, full deployment to all projects
in all languages was only completed on September 1, as "Phase V" of
the roll-out. Much of this time was spent gathering data and feedback
from the remaining projects/languages about project- or
language-specific issues, promoting localization work, etc. Wikimedia
is a big and complex beast (or bestiary).

There's also the separate usability initiative concerning multimedia
upload, which is ongoing (see
http://techblog.wikimedia.org/2011/03/uploadwizard-nearing-1-0/ for
the most recent update).

Post-Vector, there were three primary projects that kept the folks who
had worked on the original grant-funded project busy:

1) After the deployments, the engineering team working on the
initiative asked for time to re-architect the JavaScript/CSS delivery
system for MediaWiki, as a necessary precondition for more complex
software features. The result was the ResourceLoader project:
http://www.mediawiki.org/wiki/ResourceLoader which is now deployed to
all WMF projects. (A small sketch of what this looks like from the
client side follows after this list.)

2) The Article Feedback tool. With the Public Policy Initiative we had
taken on the largest project ever to improve content quality in
Wikipedia, and Sue asked us to implement a reader-driven article
quality assessment tool in order to provide additional measures of
success for the project. We also needed article feedback data in order
to measure quality change over time on an ongoing basis for other
quality-related initiatives. The tool is in production use on a few
thousand articles, and we're still analyzing the data we're getting
before making a final decision on wider deployment. See
http://www.mediawiki.org/wiki/Article_feedback/Public_Policy_Pilot/Early_Data
for our findings to date.

3) MediaWiki 1.17. One of the side effects of focusing on usability
for so long was that MediaWiki core code review had been neglected and
backlogged, much to the dissatisfaction of the volunteer developer
community. A lot of joint effort was put into clearing the code review
backlog to ensure that we could push out a new MediaWiki release,
which happened in February. Balancing strategic projects with code
review and integration for volunteer-developed code (which in some
cases can be quite complex and labor-intensive) is still very much a
work-in-progress.
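
To give a flavor of the ResourceLoader re-architecture mentioned in
1): client-side code declares the modules it needs, and ResourceLoader
delivers them (together with their dependencies) as combined, minified
batches on demand, instead of serving many separate static script and
style files. Here is a minimal sketch of the client-side entry point,
written in TypeScript; the declaration of the mw global below is my
own simplification, not the real interface:

  // Simplified stand-in for the mw global that MediaWiki exposes in
  // the browser; the real object has many more members.
  declare const mw: {
    loader: {
      using(modules: string | string[], ready?: () => void): void;
    };
  };

  // Ask ResourceLoader for the (real) mediawiki.util module. It is
  // fetched along with its dependencies in one bundled, minified
  // response, and the callback runs once everything has executed.
  mw.loader.using('mediawiki.util', () => {
    // mediawiki.util and its dependencies are now available here.
  });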

Nimish in particular spent a lot of his time helping to support
the development and piloting of OpenWebAnalytics as a potential
analytics framework to gather better real-time data about what's
happening in Wikimedia projects, precisely so we can better measure
the effects of the interventions we're making.

WMF's product development priorities going forward (not including
analytics work) are explained in more detail in the product
whitepaper: <http://strategy.wikimedia.org/wiki/Product_Whitepaper>

Mind you, I'm not at all satisfied with the rate of our progress, but
that's generally not because "we're not making X or Y high enough of a
priority" or "we suck and we don't know what we're doing", but because
we simply don't have enough engineers to do all the development work
that it takes to really support a huge and important thing like
Wikimedia well. We're continuing to hire engineers in SF and
contractors around the world, and we're actively looking into an
additional engineering presence in a lower-salary region as part of
the 2011-12 budgeting process that's currently underway.

-- 
Erik Möller
Deputy Director, Wikimedia Foundation

Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate


