On 11/21/2012 04:30 PM, Steven Walling wrote:
On Wed, Nov 21, 2012 at 3:45 PM, Quim Gil
<qgil(a)wikimedia.org> wrote:
Here is a first stab at a draft proposal to organize our volunteer
testing activities:
http://www.mediawiki.org/wiki/Talk:QA/Strategy#Manual_testing_strategy&…
Written after some lousy discussions with Chris and Sumana, and reading a
bunch of related wiki pages. Your feedback is welcome.
Ideally this _theory_ will be immediately applicable to some pilots that
we can run in the upcoming weeks. The Language and Mobile teams seem to be
ready for a try - maybe even before the end of the year. Visual Editor and
Editor Engagement teams might come next in January.
This proposal feels detached from reality.
If we want to change the current reality we need a bit of detachment,
don't we? :) I believe the proposal is realistic, though. At least it is
feasible to get us started, and then we can fine-tune after each real pilot.
Right now features teams mostly
do one of the following, in my experience:
1). Product managers and developers do their own manual QA. For PMs this
aligns with verifying requirements, for developers it's checking their own
work. It can be a pain in the ass but it works for the most part.
2). A lucky few teams have dedicated QA help, like mobile.
In either situation, manual QA tends to be done on a tight deadline,
requires an intimate understanding of the goals and requirements, and
happens within a very specific scope.
All this is very true for a bunch of critical missions measured against
a limited set of pre-defined criteria. You can still get some very
specific and focused help from volunteers for this type of work (as
Chris has pointed out), but I agree with you that it is not easy and
might lead to extra work for everybody.
However, there is a lot more to test and in many cases good
collaboration with volunteers will provide results that no busy
professional team can realistically produce alone. We can call this "the
long tail".
"The long tail" covers all those areas that ideally your professional
team would address but that always get postponed or declared out of
scope. Even areas that were initially planned to be addressed end up
getting pushed sprint after sprint. Where you can't reach, extra help
will help and will hardly do any harm.
Another area where volunteers can be a lot more useful than a product
manager or a professional tester is "user perceived quality", because
this is something that only people not involved in the project can
assess. You think something is clear, and then five outsiders can't get
through it. You believe a problem is minor, and then 3 out of 10 real
users get systematically stuck there. You see the point.
As a caveat, I don't have a lot of experience working at a large open
source project, so I haven't had the opportunity to see volunteer QA in
action. But considering my current working situation, I would rather
continue doing my own QA than rely on a volunteer who cannot be held to
a deadline and is not personally responsible for the quality of our
work. The only solutions in my mind are A) much more robust automated
testing, or B) hiring experienced QA people. Anything else is just going
to slow us down.
QA volunteers are no substitute for your homework. But don't forget that
QA volunteers (just like any volunteer profile in Wikimedia, including
editors) can be newbies and amateurs, but they can also be experts in
their area: professionals like you, willing to contribute some time and
skills.
We can have lunch and discuss these things, but I know the best way to
convince you and everybody else is with results. This is why the
proposal includes a section about measuring success. :)
The Language and Mobile teams seem to be looking forward to running the
first pilots. Let's define something sensible and useful, let's do it,
and let's measure the results.
--
Quim