The Vector skin, the main product of the Usability Initiative, was deployed on Wikimedia projects in April 2010.
Quoting usability.wikimedia.org: "The goal of this initiative is to measurably increase the usability of Wikipedia for new contributors by improving the underlying software on the basis of user behavioral studies, thereby reducing barriers to public participation."
In the year that passed since then, did anyone measure whether the usability of Wikipedia for new contributors increased?
-- Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי http://aharoni.wordpress.com "We're living in pieces, I want to live in peace." - T. Moore
I did a preliminary measurement, and it actually showed a decline starting the exact week Vector was implemented on nlwiki :( However, this preliminary measurement was unscientific and imprecise, and would need better testing and measuring.
Lodewijk
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
On 31 March 2011 22:35, Lodewijk lodewijk@effeietsanders.org wrote:
I did a preliminary measurement, and it actually showed a decline starting the exact week Vector was implemented on nlwiki :( However, this preliminary measurement was unscientific and imprecise, and would need better testing and measuring.
An immediate decline isn't surprising. Some people won't have liked the new skin (perhaps simply because they weren't used to it but were used to the old one, so for them the old one was easier to use) and will have left because of it. It's the long-term trends that we need to be looking at.
2011/3/31 Amir E. Aharoni amir.aharoni@mail.huji.ac.il:
In the year that passed since then, did anyone measure whether the usability of Wikipedia for new contributors increased?
The usability initiative was accompanied by three qualitative studies:
http://usability.wikimedia.org/wiki/Usability,_Experience,_and_Evaluation_St... http://usability.wikimedia.org/wiki/Usability_and_Experience_Study http://usability.wikimedia.org/wiki/Usability,_Experience,_and_Progress_Stud...
Our studies validated that the changes we made did indeed by and large have the intended effect of simplifying the experience of new users. With that said, the aggregate editing trends continue to be troubling. See, for example, this page for a comparison of active editors across languages:
http://stats.wikimedia.org/EN/TablesWikipediansEditsGt5.htm
... and, of course, the editor trends study and the New Wikipedians numbers. But these larger trends aren't purely technical trends -- they're social trends as well, and it's entirely possible that no amount of technical improvement will make a meaningful dent unless/until we also make progress on making Wikimedia projects more open and more welcoming.
We haven't deployed some of the last-stage features of the project yet. These include an in-editor outline of the article headings, a tabbed view of preview/edit, and a default collapsed view of templates. Making template collapsing work cleanly in all browsers and for all document operations turned out to be very hard (due to the wrangling required to make the browser's rich-text-editor behave essentially like a beefed-up code editor), so we may not ever add that feature to a wikitext editor (as opposed to a visual editor). The other two features are likely doable with some more effort, but we're prioritizing them against other improvements and the visual editor effort itself.
So, in sum, 1) our qualitative research has shown an improvement for new users, 2) the quantitative trends are troubling, and it's not demonstrable that we've made a difference either way in the larger trends (which aren't purely technical but also social trends), 3) there's still quite a bit of code that we may end up picking up again but that's not currently running on WMF projects. I'm happy that we've done Vector as a first step, but it's just that - a first step.
Well, I am quite sure I joined Wikimedia because of the change of skin, and I liked the new skin compared to Monobook.
Regards, Hydriz http://simple.wikipedia.org/wiki/User:Hydriz
From: erik@wikimedia.org Date: Thu, 31 Mar 2011 19:24:48 -0700 To: foundation-l@lists.wikimedia.org Subject: Re: [Foundation-l] Vector, a year after
-- Erik Möller Deputy Director, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
On 1 April 2011 15:46, Hydriz Wikipedia admin@wikisorg.tk wrote:
Well, I am quite sure I joined Wikimedia because of the change of skin, and I liked the new skin compared to Monobook.
I've been using it on our work intranet for new wikis. It's gained unsolicited positive comment. Vector looks nice.
- d.
On Fri, Apr 1, 2011 at 09:10, David Gerard dgerard@gmail.com wrote:
I've been using it on our work intranet for new wikis. It's gained unsolicited positive comment. Vector looks nice.
Do we know how many editors still use Monobook?
Sarah
I do, but that's mostly because I like a data-dense interface and a smaller font, and probably because I'm comfortable enough with it that I don't personally feel a need to change. Then again, I like the classic menu on Windows too.
It's quite likely that most people we want to attract would like a more modern style.
FT2
On Fri, Apr 1, 2011 at 4:29 PM, Sarah slimvirgin@gmail.com wrote:
Do we know how many editors still use Monobook?
I'd like a vector-monobook hybrid.
I miss the boxes in the sidebar, the size and color of the old logo and the footer of Monobook.
I hate the star as the watch icon; it should be an eye.
On 04/01/11 8:29 AM, Sarah wrote:
On Fri, Apr 1, 2011 at 09:10, David Gerard dgerard@gmail.com wrote:
I've been using it on our work intranet for new wikis. It's gained unsolicited positive comment. Vector looks nice.
Do we know how many editors still use Monobook?
I still use Classic, with Cologne Blue on my Wikisource page. I find the small sans-serif typeface in Vector more difficult to read and too condensed.
Ray
Analyzing the qualitative and quantitative results of the Usability Initiative is not something just anyone can do. Comments like "I personally prefer Monobook" (a fictional example) do not help to make an analysis based on facts.
Erik Möller's answer is professional and detailed in this regard.
I could add a little summary of the goals and priorities of the Usability Initiative as I understand them, which may help us understand its results. Vector was a high-priority change rated "easy to do", so the team focused on deploying it first. It is aimed at readers and editors, and the result was that new editors felt more comfortable clicking the edit link and attempting to edit. However, most proposals made to improve the editing process itself were not deployed. The editing toolbar improved the situation too, but it is not sufficient in itself to make the huge difference in editing trends that was expected. Wikipedia's editing interface is still much harder to use than modern editing interfaces on the web, so the situation did not change significantly.
However, the Usability Initiative was going in the right direction. The prototypes that significantly improve the editing process need to be brought to completion. Then it would be interesting to observe the quantitative effect on editing trends.
The Multimedia Initiative - which also aims to improve usability and follows a similar process - will have its prototype deployed soon. This should significantly increase the number of uploads on Commons. Rodan Bury
2011/4/2 Rodan Bury bury.rodan@gmail.com:
Analyzing the qualitative and quantitative results of the Usability Initiative is not something just anyone can do. Comments like "I personally prefer Monobook" (a fictional example) do not help to make an analysis based on facts.
Erik Möller's answer is professional and detailed in this regard.
I could add a little summary of the goals and priorities of the Usability Initiative as I understand them, which may help us understand its results. Vector was a high-priority change rated "easy to do", so the team focused on deploying it first. It is aimed at readers and editors, and the result was that new editors felt more comfortable clicking the edit link and attempting to edit.
Well, that was my original question: Did they? Because, frankly, Erik's reply hardly answers this question. The most recent study he cited was a study of 10 San Francisco residents. What about other cities, other countries, other languages, other projects?
I understand that WMF's resources are limited, but the development and deployment of Vector did cost some money and also forced a lot of volunteers in English and all other language projects to make adjustments to their sites. Volunteer effort is harder to measure than money, but it's certainly not negligible.
And I am wondering whether anybody measured how well these resources were spent. It's not that I'm strongly against Vector; it's nice and all, but for the last few weeks I switched back to Monobook and the old editing toolbar and I don't feel any difference. But since that's just me, I might be wrong, so I am asking again: 10 people in San Francisco? Is that all the measurement that was conducted after the switch? And did anybody try to measure the influence of Monobook and Vector on editor *retention*, another hot topic recently?
-- Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי http://aharoni.wordpress.com "We're living in pieces, I want to live in peace." - T. Moore
On 4 April 2011 16:20, Amir E. Aharoni amir.aharoni@mail.huji.ac.il wrote:
I understand that WMF's resources are limited, but the development and deployment of Vector did cost some money and also forced a lot of volunteers in English and all other language projects to make adjustments to their sites. Volunteer effort is harder to measure than money, but it's certainly not negligible.
If this is a valid argument - that technical changes should not be made if it would make work for other volunteers - then God forbid development continue on MediaWiki. See the current thread on wikitech-l about how chronically broken most site JavaScript is and what to do about the problem, given that freezing MediaWiki in perpetuity is really just not going to happen.
- d.
Thread title?
FT2
On Mon, Apr 4, 2011 at 4:27 PM, David Gerard dgerard@gmail.com wrote:
See the current thread on wikitech-l about how chronically broken most site JavaScript is and what to do about the problem
On 4 April 2011 16:33, FT2 ft2.wiki@gmail.com wrote:
Thread title?
"Focus on sister projects". Lots of it is on the archive page as of today:
http://lists.wikimedia.org/pipermail/wikitech-l/2011-April/
- d.
I'm not seeing discussion of "chronically broken" code there. Just discussion of redundant code (due to 1.17) and cleanup. Any chance of a pointer to something that sums up the "chronically broken" nature of site script?
FT2
On 4 April 2011 17:20, FT2 ft2.wiki@gmail.com wrote:
I'm not seeing discussion of "chronically broken" code there. Just discussion of redundant code (due to 1.17) and cleanup. Any chance of a pointer to something that sums up the "chronically broken" nature of site script?
e.g. http://lists.wikimedia.org/pipermail/wikitech-l/2011-April/052600.html http://lists.wikimedia.org/pipermail/wikitech-l/2011-April/052603.html http://lists.wikimedia.org/pipermail/wikitech-l/2011-April/052646.html http://lists.wikimedia.org/pipermail/wikitech-l/2011-April/052608.html <- excellent example http://lists.wikimedia.org/pipermail/wikitech-l/2011-April/052616.html <- survey of the mess http://lists.wikimedia.org/pipermail/wikitech-l/2011-April/052624.html
Amir spotted it okay ...
- d.
2011/4/4 David Gerard dgerard@gmail.com
On 4 April 2011 16:20, Amir E. Aharoni amir.aharoni@mail.huji.ac.il wrote:
I understand that WMF's resources are limited, but the development and deployment of Vector did cost some money and also forced a lot of volunteers in English and all other language projects to make adjustments to their sites. Volunteer effort is harder to measure than money, but it's certainly not negligible.
If this is a valid argument - that technical changes should not be made if it would make work for other volunteers - then God forbid development continue on MediaWiki.
Of course every change creates work for volunteers, and that's perfectly understandable. The problem is that sometimes it is justified and sometimes it is not. As nifty as Vector, SimpleSearch and the new toolbar are, I have doubts about their contribution to Wikimedia's mission. But again, I might be wrong, and that's why I am asking what measurements were made.
See the current thread on wikitech-l about how chronically broken most site JavaScript is and what to do about the problem, given that freezing MediaWiki in perpetuity is really just not going to happen.
... I am following it closely. It is, in fact, strongly related to this topic: polishing and modernizing gadgets developed by volunteer JS gurus in local projects and exporting them to other projects and languages is a much better investment of time and money, simply because it is quite certain that these gadgets were created to answer the real needs of real editors, whereas Vector grew out of very small usability studies.
For example, in the Hebrew Wikipedia there was a Search and Replace gadget long before the advent of Vector's Search and Replace dialog. It was developed due to popular demand, bottom-up, by a volunteer, and - here's the scariest part - without any grants. It is still used in the Hebrew Wikipedia, probably much more often than the Vector thingy, which is still rather useless due to bugs such as 20919 and 22801.
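For readers unfamiliar with such gadgets, the core of a search-and-replace tool for the wikitext edit box can be sketched in a few lines of plain JavaScript. This is an illustrative sketch only, not the actual Hebrew Wikipedia gadget; the function name is hypothetical, and `wpTextbox1` is the id MediaWiki gives the edit textarea.

```javascript
// Hypothetical core of a search-and-replace gadget: replace every
// occurrence of `search` in `text` with `replacement`, treating the
// search string literally (regex metacharacters are escaped first).
function replaceAllLiteral(text, search, replacement) {
  // Escape regex metacharacters so e.g. "a.b" matches only "a.b".
  var escaped = search.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  return text.replace(new RegExp(escaped, 'g'), replacement);
}

// In a real gadget this would read and write the edit box, e.g.:
//   var box = document.getElementById('wpTextbox1');
//   box.value = replaceAllLiteral(box.value, oldText, newText);
```

The escaping step is the part naive gadgets often got wrong: without it, a search string like "[[Category:X]]" would be interpreted as a regular expression and silently match nothing.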
-- Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי http://aharoni.wordpress.com "We're living in pieces, I want to live in peace." - T. Moore
As Erik Möller said, the qualitative analysis is the user testing with a few dozen users. This user testing was conducted several times during the development cycle, and it was thorough. The best user testing consists of no more than 30 users, and I can tell that the user testing conducted by the Usability Team was of a high standard.
As for the quantitative analysis, the one made during the beta testing of Vector was detailed. It clearly showed that most users - and especially newbies - preferred Vector over Monobook (retention rates of 70-80% and more).
Now, the Usability Initiative ended in April 2010, soon after the deployment of Vector to all Wikimedia wikis. The Wikimedia Foundation did not make usability one of its main priorities, and that was a mistake in my opinion. So no quantitative analysis was made after the deployment of Vector, and several projects of the Usability Initiative were frozen. Since no more information is given on this topic, it now raises doubts in the public's mind about the quality of the Usability Initiative itself; you're certainly not the only one thinking this way. This is not the fault of the Usability Initiative, however; they did a good job the whole time. The Usability Initiative ended abruptly because it was funded by a fixed-term grant, and it's the Wikimedia Foundation's fault for not carrying it on.
As for the bugs you point out (20919 https://bugzilla.wikimedia.org/show_bug.cgi?id=20919 and 22801 https://bugzilla.wikimedia.org/show_bug.cgi?id=22801), fixing them is now the responsibility of the MediaWiki developers. And we all know that the Wikimedia Foundation doesn't employ nearly enough developers to fix such bugs; the number of bug reports that need to be addressed just keeps piling up. It's a shame.
I hope this clarifies the situation. Cheers, Rodan Bury
2011/4/4 Rodan Bury bury.rodan@gmail.com:
As Erik Möller said, the qualitative analysis is the user testing with a few dozen users. This user testing was conducted several times during the development cycle, and it was thorough. The best user testing consists of no more than 30 users, and I can tell that the user testing conducted by the Usability Team was of a high standard.
See also: http://en.wikipedia.org/wiki/Usability_testing#How_many_users_to_test.3F which has links to relevant research.
Note that we did both in-person and remote testing. Remote tests were still focused on US subjects for a variety of reasons (need for reliable connectivity, increasing recruiting and scheduling complexity, etc.). Ultimately I hope chapters can get more involved in on-the-ground user testing in additional locations to surface more culture/language-specific issues.
As for the quantitative analysis, the one made during the beta testing of Vector was detailed. It clearly showed that most users - and especially newbies - preferred Vector over Monobook (retention rates of 70-80% and more).
That's correct. See http://usability.wikimedia.org/wiki/Beta_Feedback_Survey for details which included quite a bit of language-specific analysis and follow-up bugfixes. It was the largest feedback collection regarding a software feature we've ever done and surfaced key issues with specific languages, many of which were resolved.
Now, the Usability Initiative ended in April 2010, soon after the deployment of Vector to all Wikimedia wikis. The Wikimedia Foundation did not make usability one of its main priorities
That's not correct. Firstly, we continued deployments and bug fixes after the grant period. As a reminder, full deployment to all projects in all languages was only completed on September 1, as "Phase V" of the roll-out. A lot of this time was spent gathering data and feedback from these remaining projects/languages regarding project- or language-specific issues, promoting localization work, etc. Wikimedia is a big and complex beast (or bestiary).
There's also the separate usability initiative concerning multimedia upload, which is ongoing (see http://techblog.wikimedia.org/2011/03/uploadwizard-nearing-1-0/ for the most recent update).
Post-Vector, there were three primary projects that kept the folks who had worked on the original grant-funded project busy:
1) After the deployments, the engineering team working on the initiative asked to be able to spend time re-architecting the JavaScript/CSS delivery system for MediaWiki, as a necessary precondition for more complex software features. The result was the ResourceLoader project: http://www.mediawiki.org/wiki/ResourceLoader which is now deployed to all WMF projects.
2) The Article Feedback tool. With the Public Policy Initiative we had taken on the largest project ever to improve content quality in Wikipedia, and Sue asked us to implement a reader-driven article quality assessment tool in order to provide additional measures of success for the project. We also needed article feedback data in order to measure quality change over time on an ongoing basis for other quality-related initiatives. The tool is in production use on a few thousand articles and we're still analyzing the data we're getting before making a final decision on wider deployment. See http://www.mediawiki.org/wiki/Article_feedback/Public_Policy_Pilot/Early_Dat... for our findings to-date.
3) MediaWiki 1.17. One of the side-effects of focusing on usability for so long had been that MediaWiki core code review was neglected and backlogged, much to the dissatisfaction of the volunteer developer community. A lot of joint effort was put into clearing the code review backlog to ensure that we could push out a new MediaWiki release, which happened in February. Balancing strategic projects with code review and integration for volunteer-developed code (which in some cases can be quite complex and labor-intensive) is still very much a work-in-progress.
Nimish specifically also spent a lot of his time helping to support the development and piloting of OpenWebAnalytics as a potential analytics framework to gather better real-time data about what's happening in Wikimedia projects, precisely so we can better measure the effects of the interventions we're making.
The going-forward product development priorities of WMF (not including analytics work) are explained in more detail in the product whitepaper. http://strategy.wikimedia.org/wiki/Product_Whitepaper
Mind you, I'm not at all satisfied with the rate of our progress, but that's generally not because "we're not making X or Y high enough of a priority" or "we suck and we don't know what we're doing", but because we simply don't have enough engineers to do all the development work that it takes to really support a huge and important thing like Wikimedia well. We're continuing to hire engineers in SF and contractors around the world, and we're actively looking into an additional engineering presence in a lower-salary region as part of the 11-12 budgeting process that's currently underway.
On Mon, Apr 4, 2011 at 22:14, Erik Moeller erik@wikimedia.org wrote:
Why are you so convinced that paying programmers will give a better result than, to take an example we all know quite well, paying content editors? Could the worrying editor trend be stopped by hiring editors in lower-salary regions?
Or, to put it in other words: did you ever make a "MediaWiki programmer trends study"? Maybe there is something wrong with the software, the process, the usability feedback, or the engineers working on it, such that the progress is not satisfying?
rupert.
2011/4/4 Rodan Bury bury.rodan@gmail.com:
As for the quantitative analysis, the one made during the beta testing of Vector was detailed. It clearly showed that most users, and especially newbies, preferred Vector over Monobook (retention rates of 70-80% and more).
It means that for most people Vector wasn't worse than Monobook; it doesn't necessarily mean that it was significantly better. Putting a lot of money and volunteer effort into a Big Project is supposed to create something *better* than the current thing.
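This is the crux of the statistical point: a one-sample retention figure only says users didn't switch back, while showing Vector is measurably *better* would require comparing retention against some control condition. A minimal sketch of such a comparison using a pooled two-proportion z-test (all counts here are hypothetical illustrations, not the actual beta data):

```python
from math import erf, sqrt

def two_proportion_z(kept_a, n_a, kept_b, n_b):
    """Two-sided pooled z-test comparing two retention proportions."""
    p_a, p_b = kept_a / n_a, kept_b / n_b
    p_pool = (kept_a + kept_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the normal CDF (normal approximation).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 750 of 1000 beta users kept Vector,
# versus 700 of 1000 in an (imagined) Monobook control arm.
z, p = two_proportion_z(750, 1000, 700, 1000)
```

Only with a control arm and a test like this could one claim a measurable improvement; the published 70-80% retention figure is, by itself, a single-sample number.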
That's the problem with grants, I guess. If a rich - and certainly well-meaning - foundation invests money in a Big Project that doesn't hurt free knowledge, but doesn't advance it too much either, it's not a big problem by itself. But it becomes a problem when it has a hidden cost in the form of work that volunteers in all the wikis have to do to adjust their home sites to that Big Project. If this work has real results, such as making the articles significantly easier to edit or bringing a lot of new editors, then this is perfectly justified. But if it's just a nice new skin that doesn't really change anything significant, then it's kinda frustrating.
-- Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי http://aharoni.wordpress.com "We're living in pieces, I want to live in peace." - T. Moore
Le 05/04/2011 07:41, Amir E. Aharoni a écrit :
That's the problem with grants, I guess. If a rich - and certainly well-meaning - foundation invests money in a Big Project that doesn't hurt free knowledge, but doesn't advance it too much either, it's not a big problem by itself.
Indeed! Vector has opponents. But which arguments have they ever given?
- The opportunity cost?
- What?
- The opportunity cost. [1]
- Oh. Yeah, yeah. They did give us that. Uh, that's true. Yeah.
- And the community cost.
- Oh, yeah, the community cost. Remember all the toil the new skin required?
- Yeah. All right. I'll grant you the opportunity cost and the community cost are two arguments that the opponents have given.
- And the monetary cost.
- Well, yeah. Obviously the monetary cost. I mean, the money goes without saying, doesn't it? But apart from the opportunity, the community, and the monetary costs--
- Useless changes.
- Unneeded changes.
- Huh? Heh? Huh...
- Imposed changes.
- Ohh...
- Yeah, yeah. All right. Fair enough.
- And the controversy.
- Oh, yes. Yeah...
- Yeah. Yeah, that's something we'd really appreciate, if there were consensus before changes. Huh.
- All right, but apart from the opportunity cost, the community cost, the monetary cost, useless, unneeded and imposed changes, and the controversy, what have the community ever had to say to us?
- They built Wikipedia.
- Oh. Wikipedia? Shut up!
2011/4/4 Amir E. Aharoni amir.aharoni@mail.huji.ac.il:
For example, in the Hebrew Wikipedia there was a Search and Replace gadget long before the advent of Vector's Search and Replace dialog. It was developed due to popular demand, bottom-up, by a volunteer, and
- here's the scariest part - without any grants. It is still used in
the Hebrew Wikipedia, probably much more often than the Vector thingy, which is still rather useless due to bugs such as 20919 and 22801.
As lovely as bottom-up gadget development is, it also highlights the complexity of our challenge in improving usability: By allowing every community to independently develop improvements to things like the toolbar, we're very much creating a risk of degrading usability over time. After all, if you're complaining about the lack of data and formal testing supporting Vector, what justification is there for the vast majority of user-contributed JS changes, which in many cases have terrible UIs and have no formal or informal user testing or supporting data?
And honestly, Hebrew Wikipedia is a great example of this. Just a year after Vector, the standard edit page that even logged out users see has a whole new row of icons in the "Advanced" section of the toolbar, including some very non-intuitive or just plain ugly design choices which are inconsistent with any of the existing icons. Is there any supporting data for the choices that were made as to what was added to the toolbar?
Of course the answer isn't to prevent gadget development, but I do think we need (as Brion highlighted in the wikitech-l thread) much better support systems, consistently enforced style guides, etc. In addition to better analytics systems, that _should_ ultimately include access to WMF design and user testing resources to validate gadget changes, better standard code and icon libraries that gadgets can use, etc.
In nothing more than an unscientific 'hand my laptop over to a friend, wait, switch themes, wait, ask opinions' exercise, repeated with 11 guinea pigs (I mean friends), it came out a wash. After 15 minutes in each theme, it was close to a split: 7 said they preferred Monobook, 4 Vector. When asked to compare visual styles and what worked, the only repeated answer was that Monobook seemed more authoritative (and one said it 'reminded me of a textbook', which was explained to mean largely the same thing).
-Brock
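For what it's worth, a 7-4 split among 11 testers is well within coin-flip territory, which matches Brock's own caveat that the exercise was unscientific. A quick exact sign test against a 50/50 null (a sketch, not something anyone in the thread actually ran) makes that concrete:

```python
from math import comb

def sign_test_p(successes, n):
    """Two-sided exact binomial (sign) test against a 50/50 null."""
    k = max(successes, n - successes)
    # Probability of a split at least this lopsided in one direction...
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    # ...doubled for the two-sided test, capped at 1.
    return min(1.0, 2 * tail)

# 7 of 11 preferred Monobook, 4 preferred Vector.
p = sign_test_p(7, 11)
```

With n = 11 the two-sided p-value comes out around 0.55, i.e. a 7-4 preference split is entirely consistent with no underlying preference at all.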
On Mon, Apr 4, 2011 at 12:49 PM, Erik Moeller erik@wikimedia.org wrote:
-- Erik Möller Deputy Director, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate