Here is a first stab for a draft proposal to organize our volunteer testing activities:
http://www.mediawiki.org/wiki/Talk:QA/Strategy#Manual_testing_strategy
Written after some lousy discussions with Chris and Sumana, and reading a bunch of related wiki pages. Your feedback is welcome.
Ideally this _theory_ will be immediately applicable to some pilots that we can run in the upcoming weeks. The Language and Mobile teams seem to be ready for a try - maybe even before the end of the year. Visual Editor and Editor Engagement teams might come next in January.
The door is open for any other project willing to run QA activities with volunteers. Just let me know.
This is a great outline. I am looking forward to contributing to the areas where I have some experience and expertise, and learning about the areas where Quim does and I don't! -Chris
On Wed, Nov 21, 2012 at 4:45 PM, Quim Gil qgil@wikimedia.org wrote:
Here is a first stab for a draft proposal to organize our volunteer testing activities:
http://www.mediawiki.org/wiki/Talk:QA/Strategy#Manual_testing_strategy
Written after some lousy discussions with Chris and Sumana, and reading a bunch of related wiki pages. Your feedback is welcome.
Ideally this _theory_ will be immediately applicable to some pilots that we can run in the upcoming weeks. The Language and Mobile teams seem to be ready for a try - maybe even before the end of the year. Visual Editor and Editor Engagement teams might come next in January.
The door is open for any other project willing to run QA activities with volunteers. Just let me know.
-- Quim Gil Technical Contributor Coordinator Wikimedia Foundation
_______________________________________________
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
On Wed, Nov 21, 2012 at 3:45 PM, Quim Gil qgil@wikimedia.org wrote:
Here is a first stab for a draft proposal to organize our volunteer testing activities:
http://www.mediawiki.org/wiki/Talk:QA/Strategy#Manual_testing_strategy
Written after some lousy discussions with Chris and Sumana, and reading a bunch of related wiki pages. Your feedback is welcome.
Ideally this _theory_ will be immediately applicable to some pilots that we can run in the upcoming weeks. The Language and Mobile teams seem to be ready for a try - maybe even before the end of the year. Visual Editor and Editor Engagement teams might come next in January.
This proposal feels detached from reality. Right now features teams mostly do one of the following, in my experience:
1) Product managers and developers do their own manual QA. For PMs this aligns with verifying requirements; for developers it's checking their own work. It can be a pain in the ass, but it works for the most part.
2) A lucky few teams have dedicated QA help, like mobile.
In either situation, manual QA tends to be done on a tight deadline, requires an intimate understanding of the goals and requirements, and operates within a very specific scope.
As a caveat, I don't have a lot of experience working at a large open source project, so I haven't had the opportunity to see volunteer QA in action. But considering my current working situation, I would rather continue doing my own QA than rely on a volunteer who cannot be held to a deadline and is not personally responsible for the quality of our work. The only solutions in my mind are A) much more robust automated testing, or B) hiring experienced QA people. Anything else is just going to slow us down.
Steven
This proposal feels detached from reality. Right now features teams mostly do one of the following, in my experience:
1) Product managers and developers do their own manual QA. For PMs this aligns with verifying requirements; for developers it's checking their own work. It can be a pain in the ass, but it works for the most part.
2) A lucky few teams have dedicated QA help, like mobile.
I've mentioned this before but I've been pretty quiet about it. QA at WMF is still a pretty new idea, and we're still getting a lot of bits sorted, but if your project has a need for software testing/QA, I am always willing to help, and Zeljko and Michelle are also expert on the subject.
In either situation, manual QA tends to be done on a tight deadline, requires an intimate understanding of the goals and requirements, and operates within a very specific scope.
Community QA is less about deadlines and more about organizing around windows of opportunity. My best example is the testing session that we ran for AFTv5 just before the first limited release to production. A nice mix of Wikipedians and outside software testers provided well-considered testing, and we changed AFTv5 in significant ways before the release as a result of that feedback.
As a caveat, I don't have a lot of experience working at a large open source project, so I haven't had the opportunity to see volunteer QA in action. But considering my current working situation, I would rather continue doing my own QA than rely on a volunteer who cannot be held to a deadline and is not personally responsible for the quality of our work. The only solutions in my mind are A) much more robust automated testing, or B) hiring experienced QA people. Anything else is just going to slow us down.
Automated testing is hugely important, and it has been my focus in recent times, I'll be making some announcements about that very soon. On the community testing side, it is quite possible to have those who understand the requirements and desired behavior create guided test "charters" for those who are not necessarily intimately aware of the project goals.
One of the biggest impediments we have to community testing (or even user testing by insiders) is the lack of reasonable test environments. The "test" and "test2" environments are not only misnamed, but are of marginal utility. We've been investing in beta labs, and beta is so much better than it used to be, but we still have a way to go there. As I mentioned on this list before, the best way to improve beta labs at this point is to use it.
WMF has a small but dedicated QA staff. My idea of the role of QA/testing is that QA/testing is a service we may provide to particular projects. Some projects may not need QA/testing. Some projects may need it from time to time, but not always. Some projects may need community testing, and we can support that also.
What I do not want to see is for QA/testing to be some sort of mandatory gateway/hand-off/quality-police function that everything must pass through. That way lies madness. -Chris
On Wed, Nov 21, 2012 at 5:03 PM, Chris McMahon cmcmahon@wikimedia.org wrote:
Community QA is less about deadlines and more about organizing around windows of opportunity. My best example is the testing session that we ran for AFTv5 just before the first limited release to production. A nice mix of Wikipedians and outside software testers provided well-considered testing, and we changed AFTv5 in significant ways before the release as a result of that feedback.
Thanks for the rest of your response, Chris, especially regarding how you view the role of QA.
We should probably talk off-list, because the example of AFTv5 is a red flag to me that says volunteer QA would likely not be useful to my team. It is the kind of project perhaps most dissimilar from how E3 operates. But it sounds like many of my objections here are E3-specific, rather than applicable to all use cases.
Thanks,
Steven
On 11/21/2012 04:30 PM, Steven Walling wrote:
On Wed, Nov 21, 2012 at 3:45 PM, Quim Gil qgil@wikimedia.org wrote:
Here is a first stab for a draft proposal to organize our volunteer testing activities:
http://www.mediawiki.org/wiki/Talk:QA/Strategy#Manual_testing_strategy
Written after some lousy discussions with Chris and Sumana, and reading a bunch of related wiki pages. Your feedback is welcome.
Ideally this _theory_ will be immediately applicable to some pilots that we can run in the upcoming weeks. The Language and Mobile teams seem to be ready for a try - maybe even before the end of the year. Visual Editor and Editor Engagement teams might come next in January.
This proposal feels detached from reality.
If we want to change the current reality we need a bit of detachment, don't we? :) I believe the proposal is realistic, though. At the very least it is feasible enough to get us started, and then we can fine-tune after each real pilot.
Right now features teams mostly do one of the following, in my experience:
1) Product managers and developers do their own manual QA. For PMs this aligns with verifying requirements; for developers it's checking their own work. It can be a pain in the ass, but it works for the most part.
2) A lucky few teams have dedicated QA help, like mobile.
In either situation, manual QA tends to be done on a tight deadline, requires an intimate understanding of the goals and requirements, and operates within a very specific scope.
All this is very true for a bunch of critical missions measured against a limited set of pre-defined criteria. You can still get some very specific and focused help from volunteers for this type of work (as Chris has pointed out), but I agree with you that it is not easy and might lead to extra work for everybody.
However, there is a lot more to test and in many cases good collaboration with volunteers will provide results that no busy professional team can realistically produce alone. We can call this "the long tail".
"The long tail" is all those areas that your professional team would ideally address but that always get postponed or declared out of scope. Even areas that are initially planned to be addressed end up being pushed sprint after sprint. Where you can't reach, extra help will help and will hardly do any harm.
Another area where volunteers can be a lot more useful than a product manager or a professional tester is "user-perceived quality", because this is something that only people not involved in the project can assess. You think something is clear, and then five outsiders can't get through it. You believe a problem is minor, and then 3 out of 10 real users get systematically stuck there. You see the point.
As a caveat, I don't have a lot of experience working at a large open source project, so I haven't had the opportunity to see volunteer QA in action. But considering my current working situation, I would rather continue doing my own QA than rely on a volunteer who cannot be held to a deadline and is not personally responsible for the quality of our work. The only solutions in my mind are A) much more robust automated testing, or B) hiring experienced QA people. Anything else is just going to slow us down.
QA volunteers are no substitute for doing your homework. But don't forget that QA volunteers (just like any volunteer profile in Wikimedia, including editors) can be newbies and amateurs, but can also be experts in their area: professionals like you, willing to contribute some time and skills.
We can have lunch and discuss these things, but I know the best way to convince you, or anybody else, is with results. This is why the proposal includes a section about measuring success. :)
The Language and Mobile teams seem to be looking forward to running the first pilots. Let's define something sensible and useful, let's do it, and let's measure the results.
-- Quim
On Thu, Nov 22, 2012 at 5:15 AM, Quim Gil qgil@wikimedia.org wrote:
Here is a first stab for a draft proposal to organize our volunteer testing activities:
http://www.mediawiki.org/wiki/Talk:QA/Strategy#Manual_testing_strategy
Written after some lousy discussions with Chris and Sumana, and reading a bunch of related wiki pages. Your feedback is welcome.
QA activity days often lead to a large number of duplicate issues being filed. The bugmaster (I'm assuming this is Andre) should probably have a say in whether there is capacity built in to handle de-duplication, either manually or automatically.
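(As an illustration of what "automatically" could mean here: automatic de-duplication can be approximated with plain text similarity over report summaries. This is only a minimal sketch with made-up example reports, not what Bugzilla actually does; real tooling would compare incoming reports against the live bug database, components, and versions.)

```python
from difflib import SequenceMatcher

def find_duplicates(summaries, threshold=0.6):
    """Flag pairs of bug summaries whose wording is near-identical.

    Returns (i, j) index pairs of likely duplicates. A crude heuristic:
    it only compares summary text, so it misses duplicates that describe
    the same defect in different words.
    """
    pairs = []
    for i in range(len(summaries)):
        for j in range(i + 1, len(summaries)):
            ratio = SequenceMatcher(None, summaries[i].lower(),
                                    summaries[j].lower()).ratio()
            if ratio >= threshold:
                pairs.append((i, j))
    return pairs

# Hypothetical summaries of the kind a testing day might produce:
reports = [
    "VisualEditor: toolbar disappears after save",
    "Toolbar disappears after save in VisualEditor",
    "Login fails with non-ASCII username",
]
print(find_duplicates(reports))
```

A bugmaster would still review the flagged pairs by hand; the point is only to shrink the triage queue, not to auto-close anything.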
The other thought I had was about the layered personas being created for the team. Since Chris points out later in this thread that QA is a relatively "new concept", an egalitarian approach would have a much better chance of producing a larger percentage of committed members in a QA group.
QA activity days often lead to a large number of duplicate issues being filed.
This is true. But I think there is value when new users (for some value of "new") file duplicate issues. In particular, I think it points up a possible need to increase the severity/priority of the issues reported.
The other thought I had was about the layered personas being created for the team. Since Chris points out later in this thread that QA is a relatively "new concept", an egalitarian approach would have a much better chance of producing a larger percentage of committed members in a QA group.
Thank you. Again, I think this comes down to testing activities or "charters" being designed well, in advance, by those who have some knowledge of the project being tested, for the benefit of those who have less knowledge. The level of expertise from project to project for any particular person will change radically over the course of multiple test exercises. "Egalitarian" is a good word.
On 11/21/2012 05:18 PM, sankarshan wrote:
QA activity days often lead to a large number of duplicate issues being filed. The bugmaster (I'm assuming this is Andre) should probably have a say in whether there is capacity built in to handle de-duplication, either manually or automatically.
It's not a coincidence that the proposal puts bug triaging activities at the same level as testing activities. Volunteers can help, a lot, in dealing with the long tail of Bugzilla reports that fully dedicated teams have no time to go through.
The other thought I had was about the layered personas being created for the team. Since Chris points out later in this thread that QA is a relatively "new concept", an egalitarian approach would have a much better chance of producing a larger percentage of committed members in a QA group.
Sorry, I don't understand this paragraph.
Just in case: the proposal identifies some profiles and roles, with the simple purpose of agreeing on what kind of people we are looking for and how we expect them (us) to interact and collaborate with each other beyond the pure tasks of testing or bug triaging.
Hi, thank you for all your feedback. I have moved the draft to
https://www.mediawiki.org/wiki/QA/Strategy
And I have integrated the main points of the discussion here: manual testing activities work better under some conditions than others, and bug triaging is needed after them.
I have also integrated the substance of the pre-existing notes in that page, which drove me to create a couple of sections:
https://www.mediawiki.org/wiki/QA/Strategy#Follow-up_activities https://www.mediawiki.org/wiki/QA/Strategy#Community_incentives
Feel free to keep commenting and polishing. I will still be integrating new feedback, but since Chris is happy about the plan I have moved it to the DONE section of my task list.
https://www.mediawiki.org/wiki/User:Qgil#Done
PS: feedback and help are always welcome on my current and waiting tasks. In fact, this manual testing plan has made my waiting list grow noticeably... :)