There are ongoing separate discussions about the best way to organize testing sprints and bug days. The more we talk and the more we delay the beginning of continuous activities the more I believe the solution is common for both:
Smaller and more frequent activities. Each one of them less ambitious but more precise. Not requiring by default the involvement of developer teams, and especially not the involvement of WMF dev teams.
Of course we want to work together with development teams! But let's just not wait for them. They tend to be busy, willing and at the same time unwilling (a problem we need to solve, but not necessarily before starting a routine of testing and bug management activities). If a dev team (WMF or not) wants to have dedicated testing and bug management activities, we will give them top priority.
Imagine this wheel:
Week 1: manual testing (Chris)
Week 2: fresh bugs (Andre)
Week 3: browser testing (Željko)
Week 4: rotten bugs (Valerie)
All the better if there is a certain correlation between testing and bug activities, but no problem if there is none.
From the point of view of the week coordinators, this is what a cycle would look like:
Week 1: decide the goal of the next activity.
Weeks 2-3: preparing documentation, recruiting participants.
Week 4: DIY activities start. Support via IRC & mailing list. Group sprint on Wed/Thu; DIY activities continue.
Week 4+1: Evaluation of results. Goal of the next activity....
During the group sprints there would be secondary DIY tasks for those happy to participate but not fond of the main goal of the week.
If one group needs more than one activity per month, they can start overflowing into the following week, resulting in simultaneous testing & bug activities.
Compared to the current situation, this wheel looks powerful and at the same time relatively easy to set up. There will be plenty of things to improve and fine-tune, but probably none of them will require stopping the wheel.
What do you think?
Some examples to illustrate.
On 01/16/2013 02:25 PM, Quim Gil wrote:
Smaller and more frequent activities. Each one of them less ambitious but more precise. Not requiring by default the involvement of developer teams, and especially not the involvement of WMF dev teams.
...
Imagine this wheel:
Week 1: manual testing (Chris)
If there are no priorities ripe for a sprint at http://www.mediawiki.org/wiki/QA/Features_testing
then an idea could be to help commits waiting (and waiting) to be reviewed in Gerrit. Collaborating with the authors, we could test those fixes and features in fresh installs at Labs and bring first-hand feedback to the related bug reports as a way to help reviewers.
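For instance, a tester could pull a pending change into a local MediaWiki checkout using Gerrit's standard refs/changes mechanism (the change and patchset numbers below are hypothetical; Gerrit shows the exact fetch command on each change's page):

  git fetch https://gerrit.wikimedia.org/r/mediawiki/core refs/changes/56/12356/2
  git checkout FETCH_HEAD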
We could even help testing projects at https://www.mediawiki.org/wiki/Review_queue
The organization of this week could be handled by the proposed group at http://www.mediawiki.org/wiki/Groups/Proposals/Features_testing
Week 2: fresh bugs (Andre)
I don't think Andre will have problems finding tasks for this. But again, if the top-priority, WMF-led projects are well covered, then we can help and involve others, e.g. interesting extensions.
Organized by https://www.mediawiki.org/wiki/Groups/Proposals/Bug_Squad
Week 3: browser testing (Željko)
As long as there is a backlog at http://www.mediawiki.org/wiki/QA/Browser_testing/Test_backlog it should be easy for Željko to decide what comes next. Having the backlog empty would be a nice problem to have, but if that happens I'm sure we will find areas to fill it up.
Organized by http://www.mediawiki.org/wiki/Groups/Proposals/Browser_testing
Week 4: rotten bugs (Valerie)
http://www.mediawiki.org/wiki/Community_metrics/December_2012#Stalled suggests that we won't have problems finding tasks any time soon...
Also organized by the Bug Squad.
On Wed, 2013-01-16 at 16:34 -0800, Quim Gil wrote:
Some examples to illustrate.
Week 2: fresh bugs (Andre)
I don't think Andre will have problems finding tasks for this. But again, if the top-priority, WMF-led projects are well covered, then we can help and involve others, e.g. interesting extensions.
Please help me to better understand this. Do you maybe have in mind something like "Reports filed in the last four weeks that have not seen a response yet"? That would make sense.
But really "fresh" bugs need to be triaged on a ~daily basis anyway.
andre
On 01/25/2013 03:27 AM, Andre Klapper wrote:
On Wed, 2013-01-16 at 16:34 -0800, Quim Gil wrote:
Some examples to illustrate.
Week 2: fresh bugs (Andre)
I don't think Andre will have problems finding tasks for this. But again, if the top-priority, WMF-led projects are well covered, then we can help and involve others, e.g. interesting extensions.
Please help me to better understand this. Do you maybe have in mind something like "Reports filed in the last four weeks that have not seen a response yet"? That would make sense.
These weekly goals are well defined and measurable activities focusing on reaching out to, engaging and hopefully retaining new contributors.
Semi-fictional examples for inspiration:
* There is plenty of feedback related to the Mobile Beta in Bugzilla and a dozen Wikipedia community portals. Let's make sure that the legitimate reports are all filed in Bugzilla and properly prioritized.
* The last major release of the Wikipedia Mobile app got a bunch of feedback at the App Store, Google Play and some community portals. Let's go through all that and let's try to engage the best people offering feedback so we can involve them in future betas and explain how to report a bug.
* The Wikivoyage / Wikidata launches generated plenty of feedback spread all over (including enhancement requests) with a lot of potential for triaging and educating reporters. Let's do it together with the voyage and data lovers!
* The last MediaWiki version has been released and we got a wave of feedback. Let's go through it (including enhancement requests).
* Let's look at the last 3 months of enhancement requests and see what really needs more eyeballs and a push toward actual implementation, especially if the person proposing it is offering to help do the actual work.
...
But really "fresh" bugs need to be triaged on a ~daily basis anyway.
Of course, but this is no different for the rest of the QA areas.
At the end each area should have at least
* Priorities where you are working on a regular basis.
* DIY activities any volunteer can take by themselves anytime.
* A monthly activity = weekly goal.
On Wed, Jan 16, 2013 at 3:25 PM, Quim Gil qgil@wikimedia.org wrote:
All the better if there is a certain correlation between testing and bug activities, but no problem if there is none.
I'm glad you mentioned this, it's something I'd like to bring up with
Andre and Valerie. Note that much of the backlog for automated tests is the result of fixed BZ tickets (http://www.mediawiki.org/wiki/Qa/test_backlog). Fixed bugs are great candidates for regression tests because a) what broke once is more likely to break again and b) an issue fixed may indicate more issues in nearby areas of the feature. Our UploadWizard test is a great example of a single test catching multiple issues in the same area over time.
So a mechanism by which fixed browser bugs become entered in the automated browser test backlog would be a fine thing.
Compared to the current situation, this wheel looks powerful and at the same time relatively easy to set up. There will be plenty of things to improve and fine-tune, but probably none of them will require stopping the wheel.
What do you think?
How would this affect the notion of Groups? http://www.mediawiki.org/wiki/Groups/Proposals
On 01/17/2013 09:54 AM, Chris McMahon wrote:
How would this affect the notion of Groups? http://www.mediawiki.org/wiki/Groups/Proposals
In a positive way. ;)
If after every weekly sprint a group gains one more contributor, then the chances of having a better next sprint will increase.
Following the proposal, we don't need formal MediaWiki groups to start turning the wheel. With Chris, Andre, Zeljko and Valerie we have enough to push the cycles.
Of course each activity will need to have participants, and these participants might be interested in staying in touch, discussing and planning other tasks. This is where the groups might help to coordinate and generate better week sprints.
I haven't been around long enough, but one problem we seem to have is that even successful activities leave little heritage for future events. Apart from the people continuously engaged, e.g. through this list, it's almost like we are starting from scratch every time. Hopefully the groups will help bridge activities and grow continuously.
I support this. It would give me time to follow up with assignees after a bug day before the next bug day.
On Thu, Jan 17, 2013 at 12:07 PM, Quim Gil qgil@wikimedia.org wrote:
...
On 01/17/2013 02:10 PM, Valerie Juarez wrote:
I support this. It would give me time to follow up with assignees after a bug day before the next bug day.
Since the Features testing week seems to be moving to Jan 28, would you like to step in and start next week?
You are already planning a bug day around what I called Rotten bugs (are you ok with the name?) :) It doesn't need to be the Big Bug Day. We can start agreeing on the goals... tomorrow? Then work on whatever preparations and outreach are needed, encourage some DIY tasks, be available on IRC and organize the Day for... Thursday 24?
Andre, myself and hopefully others can help.
On Thu, 2013-01-17 at 16:37 -0800, Quim Gil wrote:
You are already planning a bug day around what I called Rotten bugs (are you ok with the name?) :) It doesn't need to be the Big Bug Day. We can start agreeing on the goals... tomorrow? Then work on whatever preparations and outreach are needed, encourage some DIY tasks, be available on IRC and organize the Day for... Thursday 24?
As discussed with Valerie a few days ago, I've now announced a first bugday for Tuesday 29th (see [1]) on "Rotten Bugs". I didn't consider next week good timing, as we might be busy enough tracking down problems with the datacenter move. :)
andre
[1] http://lists.wikimedia.org/pipermail/wikitech-l/2013-January/065792.html
On 01/18/2013 06:37 AM, Andre Klapper wrote:
On Thu, 2013-01-17 at 16:37 -0800, Quim Gil wrote:
You are already planning a bug day around what I called Rotten bugs (are you ok with the name?) :) It doesn't need to be the Big Bug Day. We can start agreeing on the goals... tomorrow? Then work on whatever preparations and outreach are needed, encourage some DIY tasks, be available on IRC and organize the Day for... Thursday 24?
As discussed with Valerie a few days ago, I've now announced a first bugday for Tuesday 29th (see [1]) on "Rotten Bugs".
Thank you for stepping in!
I didn't consider next week good timing, as we might be busy enough tracking down problems with the datacenter move. :)
Sure, but that Bug Day is precisely for the big majority of us who will have little to do with the critical bugs related to the datacenter move.
In the worst case, if nothing works and hostile aliens do take over, we can just apologize and leave it for another day. The preparations done will be useful in any case, and the wheel will benefit from the initial push.
On Thu, 2013-01-17 at 10:54 -0700, Chris McMahon wrote:
I'm glad you mentioned this, it's something I'd like to bring up with
Andre and Valerie. Note that much of the backlog for automated tests is the result of fixed BZ tickets (http://www.mediawiki.org/wiki/Qa/test_backlog). Fixed bugs are great candidates for regression tests because a) what broke once is more likely to break again and b) an issue fixed may indicate more issues in nearby areas of the feature. Our UploadWizard test is a great example of a single test catching multiple issues in the same area over time.
How do items currently end up on http://www.mediawiki.org/wiki/QA/Browser_testing/Test_backlog#Backlog ? Who thinks "This is a candidate for an automated browser test, I should list it on the wikipage"? QA reading the commit backlogs? Developers who fixed the issue and know about automated tests? Knowing the "target audience" might help find the best workflow.
So a mechanism by which fixed browser bugs become entered in the automated browser test backlog would be a fine thing.
1) Bugzilla already has a number of keywords, such as:
* need-integration-test (Selenium test should be written for this.)
* need-parsertest (bug needs a parsertest written for it.)
* need-unittest (bug needs a test written for it.)
We could introduce yet another keyword in Bugzilla to mark reports that could benefit from a regression test. Then somebody (who?) could set the keyword and QA could query that bug list [1↓]. However, I don't know how actively these three keywords are used. Even more, I wonder how many different workflow interpretations exist. "Developer closes bug report as RESOLVED FIXED and adds keyword" followed by "some volunteer finds out how to search for *closed* tickets with these keywords, writes a test, and removes the keyword again"? I'd love to find out, to better understand whether this is useful.
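For illustration, assuming a hypothetical need-browsertest keyword, such a query could be saved and shared as a plain buglist URL built from standard buglist.cgi parameters:

  https://bugzilla.wikimedia.org/buglist.cgi?keywords=need-browsertest&keywords_type=allwords&bug_status=RESOLVED&resolution=FIXED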
2) Another option inside of Bugzilla would be creating a dedicated "Automated browser tests to be created" component and filing (or cloning) a ticket under it every time a bug gets FIXED. Again, who would be expected to create that ticket - QA going through recently fixed reports? Developers who fixed it? Triagers?
I'm not totally convinced by these two yet, as both need buy-in (developers and reporters need to know and remember that QA works on automated browser tests, and should mark issues accordingly), but nothing better has come to my mind in the last few days. :-/
andre
[1↑] or display the buglist embedded in a wiki page, by using something like http://www.mediawiki.org/wiki/Extension:Bugzilla_Reports
On Thu, Jan 24, 2013 at 6:30 PM, Andre Klapper aklapper@wikimedia.orgwrote:
How do items currently end up on http://www.mediawiki.org/wiki/QA/Browser_testing/Test_backlog#Backlog ? Who thinks "This is a candidate for an automated browser test, I should list it on the wikipage"? QA reading the commit backlogs?
Yes. Page history says the vast majority of the edits were made by Chris and me.
Željko
Compared to the current situation, this wheel looks powerful and at the same time relatively easy to set up. There will be plenty of things to improve and fine-tune, but probably none of them will require stopping the wheel.
What do you think?
Our object here is to foster a community interested in participating in bug and testing projects. You've described one way we might create some projects, but I'd like to know more about your ideas for communicating with, creating, and supporting the communities for such projects. What makes the wheel valuable to such a community, and how do they know?
On 01/17/2013 02:14 PM, Chris McMahon wrote:
Compared to the current situation, this wheel looks powerful and at the same time relatively easy to set up. There will be plenty of things to improve and fine-tune, but probably none of them will require stopping the wheel.
What do you think?
Our object here is to foster a community interested in participating in bug and testing projects. You've described one way we might create some projects, but I'd like to know more about your ideas for communicating with, creating, and supporting the communities for such projects. What makes the wheel valuable to such a community, and how do they know?
We don't (and I don't aim to) have the perfect answers. What we know is that every iteration will probably be simpler and better than the previous one, because we will learn and gain momentum.
For instance, in my previous message about the Language features testing week I already proposed specific tactics to reach out to potential participants in non-Latin-script Wikipedias. Once the goal for the following week is defined, I'm sure we will have good ideas to reach the appropriate audience.
The wheel is basically a way for us to get started without more delays and then keep organizing sprints like clockwork.
It is also a way for testing & bug management contributors to know what to expect. Every week there is something. Every month there is at least one activity of each specific flavor.
Maybe you know paella? http://en.wikipedia.org/wiki/Paella
There is one basic rule for a good paella:
"Paella doesn't await guests: guests await paella."
If we serve paella every week in a timely manner, people will come. If they enjoyed it they will come back another week, bringing more guests.
On Thu, Jan 17, 2013 at 6:55 PM, Quim Gil qgil@wikimedia.org wrote:
If we serve paella every week in a timely manner, people will come. If they enjoyed it they will come back another week, bringing more guests.
That is a good point. From what I've seen, these types of events also tend to attract hangers-on who just happen to be idling in #mediawiki at the time. This brings more people into doing more MediaWiki things, which is a good thing.
-bawolff
On 01/16/2013 02:25 PM, Quim Gil wrote:
Imagine this wheel:
Week 1: features testing (Chris)
Week 2: fresh bugs (Andre)
Week 3: browser testing (Željko)
Week 4: rotten bugs (Valerie)
I just had a chat with Siebrand from the Language Engineering Team. They like the idea and they have specific proposals for all the weeks:
http://etherpad.wikimedia.org/test-bug-i18n
They are ready to start. Next week.
So... why not? I will only look at the first week now (features testing). Their proposals are based on wiki pages that are pretty much ready for testers, even newcomers without much prior experience:
Week 1: manual testing (Chris)
* https://www.mediawiki.org/wiki/Milkshake/Manual_testing -- can be tested for each language. Reports to bugzilla.
* https://www.mediawiki.org/wiki/VisualEditor/Typing/General -- can be tested for every language. Reports to bugzilla.
* https://www.mediawiki.org/wiki/VisualEditor/Typing/Right-to-left -- can be tested for Hebrew. Needs one tester. Reports to bugzilla.
* https://www.mediawiki.org/wiki/VisualEditor/Typing/Indic -- can be tested for all Indic languages with some adaptations. Only Hindi at the moment.
VisualEditor looks like the primary goal, having Milkshake as a secondary option for whoever feels more interested.
The testing is aimed primarily at people with an interest in Hindi and Hebrew. Other Indic and RTL languages are welcome, and in general non-Latin scripts. We have a nice pool of potential testers in the Wikipedias of those languages. Through ambassadors and community portals (and CentralNotice? too soon/fast?) we could reach whoever we decide to reach.
Of course we can do further outreach, but the Wikipedias alone should already provide the critical mass of contributors, right?
Then we need to define the right environment for testing. Is it a fresh install in Labs? Something else?
The wiki pages above already provide DIY testing cases. Together with https://www.mediawiki.org/wiki/How_to_report_a_bug we have the basics for the people willing to start contributing before the sprint.
The sprint could be on Thursday, starting at Asia-friendly times since this is where most of the potential testers will be based. We need to define whether there is going to be a specific activity during the sprint, or whether it's only a certain time-frame where full support will be provided to testers by the Language Engineering team. For instance, we could open the sprint with a hangout screencast where someone briefly goes through the tests described. All the better if the demoers are a native Hindi speaker, a native Hebrew speaker, etc. There is potential for screencasts and chat rooms in those languages as well...
I guess the goal would be to reach confidence in specific languages / scripts. If not confidence that it works well, at least confidence that the issues are now reported as bugs.
The incentive could be priority for Wikipedia in-your-language to be part of the next VisualEditor deployment:
"Hi, we plan to deploy the next version of VisualEditor in your Wikipedia in two weeks, or as soon as we have the related documentation translated (link). The testing sprint some of the contributors of this Wikipedia made just gave us the confidence to include you in our Alpha deployment. Thank you everybody!"
I think we can do it (with some adrenaline - good).
Quim,
Thanks for this summary of your discussion on language features testing. Looking forward to having community testing for Wikipedia-in-your-own-language VisualEditor and Milkshake :-)
Calling out to Indic and RTL community members - please test and report bugs. This really helps us on the Language Engineering team to have a fast turnaround on improving language features.
-Alolita
On Thu, Jan 17, 2013 at 2:21 PM, Quim Gil qgil@wikimedia.org wrote:
...
They are ready to start. Next week.
Keep in mind that we're migrating data centers next week and all the Wikipedias will be subject to intermittent read-only access and possibly other issues. Hopefully we'll be stable by Thursday.
VisualEditor looks like the primary goal, having Milkshake as a secondary option for whoever feels more interested.
To the best of my knowledge the only publicly accessible page for VE exists at http://www.mediawiki.org/wiki/VisualEditor:Test, and I believe one might have to have special rights to edit that page.
Of course we can do further outreach, but the Wikipedias alone should already provide the critical mass of contributors, right?
By what mechanism?
Then we need to define the right environment for testing. Is it a fresh install in Labs? Something else?
I would love to see VE widely enabled in beta labs. I suspect that is a non-trivial project.
The sprint could be on Thursday, starting at Asia-friendly times since this is where most of the potential testers will be based.
Seems risky to me. Others might know differently.
-Chris
On 17/01/13 22:38, Chris McMahon wrote:
To the best of my knowledge the only publicly accessible page for VE exists at http://www.mediawiki.org/wiki/VisualEditor:Test, and I believe one might have to have special rights to edit that page.
VisualEditor is available at least on enwp to users who turn it on in their preferences, and probably on some others as well. Something about wider testing and explosions.
Well, probably not explosions, but better now while it's fairly contained than later.
On 01/17/2013 02:38 PM, Chris McMahon wrote:
They are ready to start. Next week.
Keep in mind that we're migrating data centers next week and all the Wikipedias will be subject to intermittent read-only access and possibly other issues. Hopefully we'll be stable by Thursday.
That is a good point. But even more important is to decide what the testing environment is.
VisualEditor looks like the primary goal, having Milkshake as a secondary option for whoever feels more interested.
To the best of my knowledge the only publicly accessible page for VE exists at http://www.mediawiki.org/wiki/VisualEditor:Test, and I believe one might have to have special rights to edit that page.
marktraceur points to
http://ve-change-marking.instance-proxy.wmflabs.org/wiki/Main_Page
It currently runs an old version, but he volunteers to update it whenever we want to run the sprint.
Of course we can do further outreach, but the Wikipedias alone should already provide the critical mass of contributors, right?
By what mechanism?
- Coordinating with wikitech-ambassadors
- Reaching out to the communities of the languages we consider priorities (the equivalent village pumps, mailing lists, maybe wise use of CentralNotice if the communities of those projects agree).
- Regular @MediaWiki news & social media channels, asking @Wikipedia to RT.
Then we need to define the right environment for testing. Is it a fresh install in Labs? Something else?
I would love to see VE widely enabled in beta labs. I suspect that is a non-trivial project.
Answered above.
The sprint could be on Thursday, starting at Asia-friendly times since this is where most of the potential testers will be based.
Seems risky to me. Others might know differently.
If next week is too soon and the datacenter migration complicates things, then we should be able to do this the other week. I hope there wouldn't be any reason to delay further.
And that would fit the slot of Jan 30 that was left empty by Echo.
That is a good point. But even more important is to decide what the testing environment is.
Thanks Isarra, I hadn't known VE was an option on enwp now, nor did I know about http://ve-change-marking.instance-proxy.wmflabs.org/wiki/Main_Page
If next week is too soon and the datacenter migration complicates things, then we should be able to do this the other week. I hope there wouldn't be any reason to delay further.
And that would fit the slot of Jan 30 that was left empty by Echo.
I think last week of January would be less risky, in terms of both stable access and also adequate preparation.
I actually make a pretty awesome paella, I'll give you the recipe if you like. One time though I served it to my in-laws and messed up the rice, and no one ever wanted to eat it again.
I hope I don't seem too negative. I have done a few of these test events, and doing them well is not as easy as it would seem. Two overarching concerns are:
* the participants should have fun
* the results should be valuable
If the participants are faced with access issues, or confusing instructions, or spammy messages, or any of a host of other annoyances, they will not come back. Ever. Creating a fun experience takes a significant investment in planning, set up, and communication both before and during the exercise.
If the results are not valuable to the project being tested, then that is a waste of a significant investment. And again, if the participants feel like they've spent time in a wasted cause, they will not come back.
I've wanted to get a lot of eyeballs on VE for some time now, so let's figure out some details.
-Chris
On 01/17/2013 03:09 PM, Quim Gil wrote:
marktraceur points to
http://ve-change-marking.instance-proxy.wmflabs.org/wiki/Main_Page
It currently runs an old version, but he volunteers to update it whenever we want to run the sprint.
Note that this is an internal test wiki that we are using to test the VE / Parsoid interaction. It will go down occasionally and without warning, so don't rely on it for anything important.
Gabriel
On Fri, Jan 18, 2013 at 3:51 AM, Quim Gil qgil@wikimedia.org wrote:
- https://www.mediawiki.org/wiki/VisualEditor/Typing/Indic -- can be tested for all Indic languages with some adaptations. Only Hindi at the moment.
A small suggestion would be to increase the keystroke combinations involved in the test in order to cover a greater number of valid conjuncts and also note ease of use for entry.
This proposal got a basic agreement and is being implemented at
https://www.mediawiki.org/wiki/QA/Weekly_goals
A rough start is expected in the first iteration of the four areas but we hope to have improvements every week.
Get involved!
Development teams: your proposals for testing & bug management weekly goals are welcome.
On 01/16/2013 02:25 PM, Quim Gil wrote:
...
On Wed, Jan 23, 2013 at 10:56 PM, Quim Gil qgil@wikimedia.org wrote:
https://www.mediawiki.org/wiki/QA/Weekly_goals
The first two events have the same date, Jan 28. Is that on purpose?
Željko
On 01/24/2013 02:10 AM, Željko Filipin wrote:
On Wed, Jan 23, 2013 at 10:56 PM, Quim Gil qgil@wikimedia.org wrote:
https://www.mediawiki.org/wiki/QA/Weekly_goals
The first two events have the same date, Jan 28. Is that on purpose?
No, but it's real. :)
Let's take it as an initial wheelslip, fixed in the following weeks. :)
The Mobile team is committing to the next Features testing week starting on Feb 25. Yay!
Who else? Note that these QA weeks are open to ALL projects, not just Wikimedia Foundation teams.
We in QA discussed some possibilities for the browser test automation community activities, and we suggest that the first couple of community events be educational. In particular, we think it would be beneficial to start with some introductory topics to be presented as a hangout+IRC chat+documentation on the wiki. Our suggestions for the first two events:
* how anyone can write a Test Scenario to be automated (and why this is important!) -- see the sketch below
* how to read, understand and analyze results in the Jenkins system we have for browser automation
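For illustration, such a scenario might look something like this -- a hypothetical sketch in the plain-English Given/When/Then format used by tools like Cucumber (the feature and all steps are invented for the example):

  Scenario: Log in with a valid user
    Given I am on the login page
    When I log in with a valid username and password
    Then I should see my username in the personal toolbar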
-Chris
On Wed, Jan 23, 2013 at 2:56 PM, Quim Gil qgil@wikimedia.org wrote:
...
Thanks Chris & Zeljko. We have two more weeks booked now:
https://www.mediawiki.org/wiki/QA/Weekly_goals
On 01/24/2013 09:30 AM, Chris McMahon wrote:
We in QA discussed some possibilities for the browser test automation community activities, and we suggest that the first couple of community events be educational.
Very good idea! Still, all the better if we can define actual contribution goals around those educational activities.
For instance:
In particular, we think it would be beneficial to start with some introductory topics to be presented as a hangout+IRC chat+documentation on the wiki. Our suggestions for the first two events:
- how anyone can write a Test Scenario to be automated (and why this is important!)
Booked for the week starting on Feb 11 as
Write your first Test Scenario in plain English
The educational approach is exactly as you describe it. The practice is to write one test scenario at the end of the week. We will give actual scenarios to volunteers who ask for them, and we will help them get through.
At the end of the week we should have not only a group of people who know in theory how to write test scenarios in plain English, but also a real set of new scenarios ready to go for the next step.
- how to read, understand and analyze results in the Jenkins system we have for browser automation
A good proposal, booked for the week starting on Mar 11. Please help define what the practice could be: the actual contribution made by participants at the end of the week.
- how to read, understand and analyze results in the Jenkins system we have for browser automation
A good proposal, booked for the week starting on Mar 11. Please help define what the practice could be: the actual contribution made by participants at the end of the week.
It's right here if you want to take a look: https://wmf.ci.cloudbees.com/ -C
On 01/24/2013 10:52 AM, Chris McMahon wrote:
- how to read, understand and analyze results in the Jenkins system we have for browser automation
A good proposal, booked for the week starting on Mar 11. Please help define what the practice could be: the actual contribution made by participants at the end of the week.
It's right here if you want to take a look: https://wmf.ci.cloudbees.com/
What we need to decide is what useful contribution we can ask volunteers to make with it during the week of training.