2011/4/4 Amir E. Aharoni amir.aharoni@mail.huji.ac.il
2011/4/4 David Gerard dgerard@gmail.com
On 4 April 2011 16:20, Amir E. Aharoni amir.aharoni@mail.huji.ac.il
wrote:
I understand that WMF's resources are limited, but the development and the deployment of Vector did cost some money and also forced a lot of volunteers in English and in all other language projects to make adjustments to their sites. Volunteer effort is harder to measure than money, but it's certainly not negligible.
If this is a valid argument - that technical changes should not be made if it would make work for other volunteers - then God forbid development continue on MediaWiki.
Of course every change creates work for volunteers, and that's perfectly understandable. The problem is that sometimes it is justified and sometimes it is not. As nifty as Vector, SimpleSearch and the new toolbar are, I have doubts about their contributions to Wikimedia's mission. But again, I might be wrong, and that's why I am asking what measurements were made.
See the current thread on wikitech-l about how chronically broken most site JavaScript is and what to do about the problem, given that freezing MediaWiki in perpetuity is really just not going to happen.
... I am following it closely. It is, in fact, strongly related to this topic: polishing and modernizing the gadgets developed by volunteer JS gurus in local projects, and exporting them to other projects and languages, is a much better investment of time and money. It is quite certain that these gadgets were created to answer the real needs of real editors, whereas Vector grew out of very small usability studies.
For example, in the Hebrew Wikipedia there was a Search and Replace gadget long before the advent of Vector's Search and Replace dialog. It was developed due to popular demand, bottom-up, by a volunteer, and - here's the scariest part - without any grants. It is still used in the Hebrew Wikipedia, probably much more often than the Vector thingy, which is still rather useless due to bugs such as 20919 and 22801.
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
"We're living in pieces, I want to live in peace." - T. Moore
As Erik Möller said, the qualitative analysis was the user testing with a few dozen users. This user testing was conducted several times during the development cycle, and it was thorough. The best user testing consists of no more than 30 users, and I can tell that the user testing conducted by the Usability Team was of high quality and up to standard.
As for the quantitative analysis, the one made during the beta testing of Vector was detailed. It clearly showed that most users - and especially newbies - preferred Vector over Monobook (retention rates of 70-80% and higher).
Now, the Usability Initiative ended in April 2010, soon after the deployment of Vector to all Wikimedia wikis. The Wikimedia Foundation did not make usability one of its main priorities, and that was a mistake in my opinion. As a result, no quantitative analysis was made after the deployment of Vector, and several projects of the Usability Initiative were frozen. Since no more information is being published on this topic, doubts are now growing in the public's mind about the quality of the Usability Initiative itself; you're certainly not the only one thinking this way. This is not the fault of the Usability Initiative, however - they did a good job the whole time. The Usability Initiative ended abruptly because it was funded by a fixed-term grant, and it's the Wikimedia Foundation's fault for not carrying it on.
As for the bugs you point out (20919: https://bugzilla.wikimedia.org/show_bug.cgi?id=20919 and 22801: https://bugzilla.wikimedia.org/show_bug.cgi?id=22801), it is now the responsibility of the MediaWiki developers to fix them. And we all know that the Wikimedia Foundation doesn't employ nearly enough developers to fix such bugs. The backlog of bug reports that need to be addressed just keeps piling up. It's a shame.
I hope this clarifies the situation.

Cheers,
Rodan Bury