I didn't Cc mobile-l.
---------- Forwarded message ----------
From: Chris McMahon <cmcmahon(a)wikimedia.org>
Date: Mon, Jul 14, 2014 at 6:45 AM
Subject: Re: [QA] [WikimediaMobile] Failing MobileFrontend browser tests
To: "QA (software quality assurance) for Wikimedia projects." <
qa(a)lists.wikimedia.org>
On Mon, Jul 14, 2014 at 5:52 AM, Željko Filipin <zfilipin(a)wikimedia.org>
wrote:
> On Fri, Jul 11, 2014 at 9:35 PM, Arthur Richards <arichards(a)wikimedia.org>
> wrote:
>
>> Chris McMahon sent an email announcing this on June 6 [0]. In addition,
>> Chris said 'The tests for MF on beta labs running in headless Firefox under
>> xvfb are reliably green as of today and we'll be working to keep them that
>> way.'
>>
>
> Running tests using xvfb proved to be more unstable than Sauce Labs. We
> have moved to Sauce Labs until we have some time to investigate the failures.
>
Specifically, more than a few simultaneous headless Firefox processes
bring the WMF Jenkins host to its knees, leaving no usable browser
processes at all. SauceLabs also has great diagnostic tools available.
>> Is there anyone currently owning or willing to own digging into and
>> resolving these issues? Can we get any kind of timeline for resolving this -
>> even if it's just in regards to when the issue will be able to be
>> investigated?
>>
>
> Rob, Chris, as far as I know, I have no big projects at the moment. Should
> I focus on this?
>
The original post in this thread makes me think that Juliusz was analyzing
and resolving these, and then we never heard from him again on this thread.
I normally do this along with Juliusz, Jon, Kaldari, Max, etc. when I'm
not on vacation.
There are four ways a browser test can fail:
1) A bug in the feature.
2) Something wrong with beta labs.
3) New, proper behavior from the feature that the test has not been updated
to reflect.
4) Something wrong with the internet or the browser itself.
In cases 1 and 2, we file a bug in Bugzilla. In case 3, we update the
test code. In case 4, we code the tests as defensively as we possibly can.
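To make case 4 concrete: "coding defensively" usually means retrying only transient failures while letting genuine assertion failures (cases 1-3) surface immediately. Here is a rough generic sketch of that idea in Python - an illustration, not the actual MF test code, and the exception name is hypothetical:

```python
import time

class TransientError(Exception):
    """Stand-in for a flaky network/browser failure (hypothetical)."""

def retry(action, attempts=3, delay=0.0, transient=(TransientError,)):
    """Retry an action a few times before letting the failure surface.

    Only transient errors are retried; genuine test failures should
    fail the run immediately, so they are not caught here.
    """
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except transient:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Example: an action that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_click():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("connection reset")
    return "clicked"

print(retry(flaky_click))  # succeeds on the third attempt
```

The important part is the narrow `transient` tuple: wrapping everything in a blanket retry would hide real regressions.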
In all these cases, we have to actually read and understand the test
results in order to know what to do about them. Apropos of that, I have
been thinking for some time about a tutorial on how to analyze browser
test failures and work with the results. I think a number of people would
benefit from something like that.
Apropos of 4), I really want to continue the discussion about the changes I
left hanging for MF tests in gerrit before going on vacation. This one I
think should get +2 very soon, but I really want you MF test developers to
understand why the Page Object design pattern is important:
https://gerrit.wikimedia.org/r/#/c/142605/ . This one is causing problems
in other builds, so I separated out the part that protects the page. This
should really be an API call, but this change is an interim step while we
add the ability to protect a page to the APIPage object:
https://gerrit.wikimedia.org/r/#/c/142605/
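For anyone unsure why the Page Object pattern matters: the test never touches raw selectors; each page exposes a small API, so when markup changes (failure mode 3) only one class needs updating instead of every scenario. A minimal sketch in Python, with a fake browser and made-up selectors purely for illustration:

```python
class FakeBrowser:
    """Minimal stand-in for a real WebDriver, for illustration only."""
    def __init__(self):
        self.actions = []
    def type(self, selector, text):
        self.actions.append(("type", selector, text))
    def click(self, selector):
        self.actions.append(("click", selector))

class SearchPage:
    """Page Object: scenarios call search_for(); selectors live here.

    If the markup changes, this is the only place to update.
    """
    SEARCH_BOX = "#searchInput"      # hypothetical selectors
    SEARCH_BUTTON = "#searchButton"

    def __init__(self, browser):
        self.browser = browser

    def search_for(self, term):
        self.browser.type(self.SEARCH_BOX, term)
        self.browser.click(self.SEARCH_BUTTON)

browser = FakeBrowser()
SearchPage(browser).search_for("Zebra")
print(browser.actions)
```

The same idea applies regardless of language or driver: the page class is the single seam between test intent and page markup.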
There's a curious thing about the latest version of the Quora mobile app.
The newest update is a major redesign. Earlier it was possible to post only
in plain text from mobile devices and to edit existing posts if they had
only plain text. Now it's possible to post and to edit existing posts in
rich text, and it even works well.
So you may want to take a look and take some ideas for mobile VE :)
Quora's rich text editor was always one of the best of its kind on the web,
and now it's on mobile apps, too.
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
“We're living in pieces,
I want to live in peace.” – T. Moore
Minutes and slides from Wednesday's quarterly review of the Foundation's
Wikipedia Zero (Mobile Partnerships) team are now available at
https://meta.wikimedia.org/wiki/WMF_Metrics_and_activities_meetings/Quarter…
.
On Wed, Dec 19, 2012 at 6:49 PM, Erik Moeller <erik(a)wikimedia.org> wrote:
> Hi folks,
>
> to increase accountability and create more opportunities for course
> corrections and resourcing adjustments as necessary, Sue's asked me
> and Howie Fung to set up a quarterly project evaluation process,
> starting with our highest priority initiatives. These are, according
> to Sue's narrowing focus recommendations which were approved by the
> Board [1]:
>
> - Visual Editor
> - Mobile (mobile contributions + Wikipedia Zero)
> - Editor Engagement (also known as the E2 and E3 teams)
> - Funds Dissemination Committee and expanded grant-making capacity
>
> I'm proposing the following initial schedule:
>
> January:
> - Editor Engagement Experiments
>
> February:
> - Visual Editor
> - Mobile (Contribs + Zero)
>
> March:
> - Editor Engagement Features (Echo, Flow projects)
> - Funds Dissemination Committee
>
> We’ll try doing this on the same day or adjacent to the monthly
> metrics meetings [2], since the team(s) will give a presentation on
> their recent progress, which will help set some context that would
> otherwise need to be covered in the quarterly review itself. This will
> also create open opportunities for feedback and questions.
>
> My goal is to do this in a manner where even though the quarterly
> review meetings themselves are internal, the outcomes are captured as
> meeting minutes and shared publicly, which is why I'm starting this
> discussion on a public list as well. I've created a wiki page here
> which we can use to discuss the concept further:
>
>
> https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings/Quarterly_r…
>
> The internal review will, at minimum, include:
>
> Sue Gardner
> myself
> Howie Fung
> Team members and relevant director(s)
> Designated minute-taker
>
> So for example, for Visual Editor, the review team would be the Visual
> Editor / Parsoid teams, Sue, me, Howie, Terry, and a minute-taker.
>
> I imagine the structure of the review roughly as follows, with a
> duration of about 2 1/2 hours divided into 25-30 minute blocks:
>
> - Brief team intro and recap of team's activities through the quarter,
> compared with goals
> - Drill into goals and targets: Did we achieve what we said we would?
> - Review of challenges, blockers and successes
> - Discussion of proposed changes (e.g. resourcing, targets) and other
> action items
> - Buffer time, debriefing
>
> Once again, the primary purpose of these reviews is to create improved
> structures for internal accountability, escalation points in cases
> where serious changes are necessary, and transparency to the world.
>
> In addition to these priority initiatives, my recommendation would be
> to conduct quarterly reviews for any activity that requires more than
> a set amount of resources (people/dollars). These additional reviews
> may however be conducted in a more lightweight manner and internally
> to the departments. We’re slowly getting into that habit in
> engineering.
>
> As we pilot this process, the format of the high priority reviews can
> help inform and support reviews across the organization.
>
> Feedback and questions are appreciated.
>
> All best,
> Erik
>
> [1] https://wikimediafoundation.org/wiki/Vote:Narrowing_Focus
> [2] https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings
> --
> Erik Möller
> VP of Engineering and Product Development, Wikimedia Foundation
>
> Support Free Knowledge: https://wikimediafoundation.org/wiki/Donate
>
> _______________________________________________
> Wikimedia-l mailing list
> Wikimedia-l(a)lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
>
--
Tilman Bayer
Senior Operations Analyst (Movement Communications)
Wikimedia Foundation
IRC (Freenode): HaeB
(cc-ing the designers, since I'm not sure who all is on mobile-l and who
isn't...)
Yesterday, some folks from design + me did some brainstorming around what
our first set of mobile contributions to Wikidata might look like. As a
reminder, at Q1 planning, we decided on 2 high-level flavors of features in
this area, both of which, thanks to Moiz, have horrible names :)
"Wiki-Tinder" was contributions that don't involve any freeform text input
and are just tapping/swiping – for example, tapping to tag a person's
gender. "Wiki-Twitter" was features that do involve some short freeform
text entry – for example, filling in the short descriptor field to describe
an article in 5 words or less. In this session, we focused on the
Wiki-Tinder side, since the ideas in that bucket were still pretty murky.
We decided on a few parameters up-front:
1) As with the mobile web upload workflow, we want to efface the difference
between all our sister projects so users aren't confused. So all of the
ideas below involve being on a Wikipedia article and being prompted to add
information that helps *Wikipedia*, but actually results in a Wikidata edit.
2) We want these contributions to be meaningful, both for our projects and
for the user – luckily, adding more structured data to Wikidata, no matter
how seemingly simple, will provide a tremendous benefit to all our projects
down the line. We just need to make that clear for our users :) So, we
decided that it's okay to test out some contributions that don't
immediately show up in the body of the article, as long as we can convey
their purpose to the user somehow (e.g., "Thanks! This will make searching
Wikipedia better in the future"). This is analogous to Yelp and Foursquare
prompts ("Is this restaurant good for vegetarians?") that don't immediately
get reflected in content until a certain number of people have answered the
prompt.
3) Since our yearly and quarterly targets are set around raising *new
mobile active editors* (users who register on mobile and go on to make 5+
edits/month on desktop or mobile), we want these games to generate edits.
So while it will be cool in the future to explore features that aggregate a
bunch of different people's inputs and only present information after X
number of responses, we're going to hold off on that for now and focus on a
one-to-one user-contribution model.
Here are the ideas we came up with:
1) *Adding Wikidata properties.* Over half of all items in Wikidata lack
basic information about what they are – all we have is the title of the
item.[1] When a user is viewing an article that has an entry in Wikidata
but no other information, we can present a CTA asking them to choose from
the common top-level Wikidata properties: person, place, event,
organization, etc.[2] We might explore doing this just once, or asking
people to continue categorizing an article several times.
2) *Refining Wikidata properties.* This is similar to the Wikidata games
that Magnus Manske came up with.[3] Once we know the high-level property of
a Wikidata item (whether it's a person, event, organization, etc.), we can
keep adding sub-properties to make that item more detailed. To do this, we
can present a CTA for users to add additional properties – e.g., if they're
on an article about a person, ask users "Does this person have an
occupation?" We'd want to build something generic here, in case we start
running out of items with property X and missing sub-property Y, as Magnus
began to ;)
3) *Wikidata quiz.* This is more about the presentation of 1 & 2 – one way
to think about how we show the CTA is as a quick quiz item for people
who've scrolled down past a certain point on the page.
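To give a sense of what a single tap in idea 1 could produce behind the scenes: tagging an article's item as, say, a person would amount to one `wbcreateclaim` call against the Wikidata API. A rough Python sketch of just the request parameters - no network call is made, and the exact parameter shapes should be double-checked against the Wikibase API docs:

```python
import json

def instance_of_claim_params(entity_id, target_item_id, token="+\\"):
    """Build wbcreateclaim parameters tagging an item's type.

    Sketch only: a real edit needs a POST to the wiki's api.php with a
    valid edit token ("+\\" is only the anonymous placeholder token).
    """
    numeric_id = int(target_item_id.lstrip("Q"))
    return {
        "action": "wbcreateclaim",
        "entity": entity_id,             # item behind the article being viewed
        "property": "P31",               # "instance of"
        "snaktype": "value",
        "value": json.dumps({"entity-type": "item", "numeric-id": numeric_id}),
        "token": token,
        "format": "json",
    }

# A user tapping "person" while reading about Douglas Adams (Q42):
params = instance_of_claim_params("Q42", "Q5")
print(params["value"])
```

Idea 2's sub-property prompts would be the same call with a different property ID (e.g. occupation), which is part of why a generic client-side implementation seems feasible.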
The designers will start making some wireframes to make all this stuff a
little more concrete – we can go over them as a team at our next planning
meeting (next Wednesday the 16th) and hammer out more of the details. My
hope is that we can get some kind of prototype of 1 or 2 of these ideas –
even just a static set of pages that people can tap through – in order to
do some quick in-person testing before knocking together an alpha version
of the most promising candidate(s) in the sprint after next (last week of
July). If we play our cards right, that means we might have something to
showcase (and test with!) at Wikimania :)
1.
http://ultimategerardm.blogspot.com/2014/07/wikidata-items-with-no-statemen…
2. https://www.wikidata.org/wiki/Wikidata:List_of_properties
3. http://tools.wmflabs.org/wikidata-game/
--
Maryana Pinchuk
Product Manager, Wikimedia Foundation
wikimediafoundation.org
Currently, the Android app does not clearly disclose to users when they're
IP editing that their IP address will be publicly logged.
Dan, would you please review the comments in
https://gerrit.wikimedia.org/r/#/c/143156/ and let us know how to proceed?
Thanks.
-Adam
Vibha and I met briefly today regarding the night-mode color scheme. The
major points that were touched upon:
- The color of links in night mode is a little too bright or "neon-y".
Designers will come up with an updated (less saturated) blue color for
links.
- Designers were entertaining the idea of having a single unified (black)
color for the article background, as well as the ToC and Nav menu:
https://www.dropbox.com/s/5cci14w4qqx2isx/Untitled1.png
As opposed to:
https://www.dropbox.com/s/oa7osa98x2rhbcn/Untitled2.png
Personally, the more I think about it, the more strongly I prefer the
latter scheme (a slightly grey color that stands out from the black article
background). I think black-on-black makes it look a little *too*
minimalist.
-Dmitry
Where should I ssh to access this folder?
On Thu, Jul 10, 2014 at 2:21 AM, Antoine Musso <hashar+wmf(a)free.fr> wrote:
> Le 10/07/2014 00:57, Juliusz Gonera a écrit :
>>
>> Those are just a few examples from recent failures, but they make
>> tracking regressions really tedious and time consuming. I know we are
>> planning to move away from Saucelabs and use our own servers to run the
>> tests. When will this happen? Is there any deadline?
>
> Hello,
>
> We are sticking with Saucelabs. The service they offer (booting a VM
> with a given browser) can't be reproduced on our own infrastructure
> without a major engineering and hardware investment. Given the price of
> Saucelabs, it is not worth it.
>
> If there are any suspected issues with Saucelabs reaching beta, they need
> to be investigated, the root cause found and... fixed! :-D
>
>
> The beta cluster has full debugging enabled and logs are under
> /data/project/logs, which might help track what is happening on the
> server side.
>
>
> --
> Antoine "hashar" Musso
>
>
> _______________________________________________
> QA mailing list
> QA(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/qa
Hi everyone,
So far, we're doing excellently on sprint 35. In spite of it being a
whopping 37-point sprint, we only have 1 point left in the "To do" column.
such velocity, much features, etc.
If you're an iOS engineer who runs out of things to do, sprint 36
currently includes a number of scoped, designed and estimated cards that
were punted to the next sprint. If you're looking to work on features,
please pick those cards up!
If you're an Android engineer, there weren't as many cards punted, so go
rummaging around in the backlog for cool things to do. The "bug backlog"
and "feature backlog" columns are in priority order, but some of the
features aren't well scoped so I'd probably stick to bugs.
As always, if you're looking for guidance on what to do, let me know! I'm
happy to help.
Dan
--
Dan Garry
Associate Product Manager for Platform and Mobile Apps
Wikimedia Foundation
On Thu, Jul 10, 2014 at 2:21 AM, Antoine Musso <hashar+wmf(a)free.fr> wrote:
> The beta cluster has full debugging enabled and logs are under
> /data/project/logs , that might help tracking what is happening on the
> server side.
Would it be possible (and not too much work) to add a cookie, or some
identifier in the request, so it's easier to find the debug logs for a
test that failed?
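To sketch what that could look like (the identifier scheme and the cookie name below are entirely made up; the actual mechanism - cookie vs. request header - would need to be agreed on with ops):

```python
import uuid

def test_run_id(build="mobile-frontend", job=17):
    """Generate an identifier a test run could attach to each request.

    Hypothetical scheme: if every request from one scenario carries this
    string, grepping the beta-cluster logs under /data/project/logs for
    it would locate the server-side entries for that failed test.
    """
    return "browsertest-%s-%d-%s" % (build, job, uuid.uuid4().hex[:8])

run_id = test_run_id()
# e.g. with a WebDriver-based test this might become something like:
#   driver.add_cookie({"name": "X-Test-Run", "value": run_id})
print(run_id)
```

The test framework would then print the same ID into the Jenkins console output, so a failure report and the server logs can be matched by a single grep.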