*tl;dr: The Mobile Apps Team is going to trial seamlessly surfacing full
text search results instead of prefixsearch results in cases where
prefixsearch does not give satisfactory results.*
The Mobile Apps Team has received a lot of feedback that our search feature
isn't the best. Our latest metrics confirm this; around 19% of queries give
the user no results. Our working hypothesis is that this is because we use
prefixsearch on article titles, which is very insensitive to typos and
To help with this, we implemented full-text search. The user has two
options for searching, prefixsearch and full text. See this screenshot
<http://i.imgur.com/kxymYF4.png> for what these options look like.
However, the way we present the two options to users is suboptimal. There's
no clear mental model for when one should be used compared to the other.
The design team recommended that we simply present whichever result set is
better for any given query. But how do we decide which result set is
better? To validate this course of action, we audited which of the two
options, prefixsearch or full text search, was better for a set of queries.
The results are here
The takeaways of our audit:
- In cases where there are very few prefixsearch results (fewer than
around 5), the full text results are just as good or better than the
prefixsearch results. Often, this is because the "did you mean"
functionality of the full text API helps the user out.
- In cases where there are a good number of prefixsearch results, the
prefixsearch results tend to be better than the fulltext results.
Here's what we're going to try:
- Remove the buttons from the UI.
- By default, use prefixsearch for searches.
- If there are fewer than 5 prefixsearch results, show full text search
results instead.
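The fallback rule above can be sketched as follows. This is a minimal sketch, not the apps' actual code; the function and parameter names here are hypothetical.

```python
# Proposed fallback: use prefixsearch by default, and fall back to full
# text search when prefixsearch returns too few results.
# Hypothetical helper; the real logic lives in the apps' search code.

PREFIX_RESULT_THRESHOLD = 5  # "fewer than 5" from the plan above

def choose_results(prefix_results, full_text_results,
                   threshold=PREFIX_RESULT_THRESHOLD):
    """Return the prefixsearch results unless there are too few of them,
    in which case return the full text search results instead."""
    if len(prefix_results) < threshold:
        return full_text_results
    return prefix_results
```

When prefixsearch returns 5 or more results they are shown unchanged, so the common case behaves exactly as today.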
Metrics for success:
- Higher search clickthrough (users finding what they need more)
- Fewer queries returning 0 results (users served more results)
- Fewer queries per search session (users finding what they need faster)
The advantage of this experiment is that it's safe to fail: there is no
actual UX change, so if we decide our solution isn't good enough, then we
can roll out the fallback of surfacing the buttons without users thinking
we're just endlessly tweaking the UI.
Please do get in touch if you have any questions!
Associate Product Manager, Mobile Apps
I noticed that clicks to the hamburger menu icon in the top left
corner and the languages icon at the bottom of the page were very
similar, so I thought it would be interesting to look at both
of these elements on a per-project level.
I took a sample day and constructed this table comparing clicks to the
hamburger button with clicks to the languages button.
I then divided clicks to hamburger by clicks to languages.
My hypothesis is that the higher the clicks to the languages icon are
relative to clicks to the hamburger icon, the less recognisable the
hamburger icon is in that language.
I filtered out wikis where the clicks to languages were fewer than 50,
as I decided the data set for those was too small.
Interestingly, for enwiki the score is close to 1 (0.9625941071); it is a
language in which I would expect this icon to translate well.
For azwiki (Azerbaijani language) the languages button has
considerably more clicks: 1366 compared to 487 (0.3565153734).
Other Wikipedias where the hamburger icon might not be translating
well (where score is less than 0.5):
Bosnian, Polish, Japanese, Korean.
I've shared my data on a public URL; feel free to explore, analyse, and comment.
As a next step it would be interesting to pick a project, e.g.
Japanese, monitor clicks to hamburger vs languages, and see how these
values change with a different icon.
http://mobile-reportcard.wmflabs.org/#graph-limn225 (UI daily graph)
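The score described above can be sketched as follows. This is a minimal sketch; the function and variable names are hypothetical, and the real numbers come from the click-tracking data.

```python
# Score = clicks to the hamburger icon / clicks to the languages icon.
# Wikis with fewer than 50 clicks to languages are filtered out, as in
# the analysis above.

MIN_LANGUAGE_CLICKS = 50

def hamburger_score(hamburger_clicks, language_clicks,
                    min_clicks=MIN_LANGUAGE_CLICKS):
    """Ratio of hamburger clicks to languages clicks, or None when the
    sample is too small to be meaningful."""
    if language_clicks < min_clicks:
        return None
    return hamburger_clicks / language_clicks

# azwiki figures from the mail: 487 hamburger clicks, 1366 languages clicks
azwiki = hamburger_score(487, 1366)  # ~0.3565, suggesting poor recognition
```

A score well below 1 means users click languages far more often than the hamburger, which is what the hypothesis above treats as a sign the hamburger icon is not being recognised.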
Hi mobile folk,
After 48 merged patches, I have updated all the browser tests in the
MobileFrontend repo to conform to RSpec 3 syntax. Along the way I did a few
other things:
* removed every sleep() statement except one necessary to get around a bug
* consolidated the lines within each step in each test to be as succinct as
possible
* handled and removed a number of FIXME comments
* removed a significant amount of dead/unused/irrelevant code
* made the Feature description of each test step consistent with what each
step actually accomplishes
* removed all the instances where "Then" steps were re-used as Given or
When (they are conceptually different)
On the style front:
* all the steps are now in alphabetical order according to Given/When/Then,
and all the GWT specifications in the .feature files conform to their
corresponding implementations in steps files.
* in the Features, every Then step contains the word "should", in the steps
files, every Then step contains an RSpec assertion
* no Given or When steps contain either the word "should" or an RSpec
assertion
This all should make working in the browser test repo significantly easier
and more straightforward, as well as making far better use of the most
modern implementation of RSpec.
For my next trick I am going to make the browser test repo conform as
closely as possible to rubocop style rules, but the heavy lifting with
regard to technical debt in the browser test repos is mostly handled.
Let me know if you have any questions or if you would like a tour...
FYI: feedback from Douala, Cameroon, concerning uploading photos to Commons.
---------- Forwarded message ----------
From: Kasper Souren <kasper.souren(a)gmail.com>
Date: Wed, Dec 3, 2014 at 12:01 AM
Subject: Re: [African Wikimedians] Afripédia Douala
To: Mailing list for African Wikimedians
On Tuesday, December 2, 2014, Florence Devouard <anthere(a)anthere.org> wrote:
> I wanted to outline that both Kumusha Takes Wiki and Wiki Loves Africa are being conducted in English and French
I see the contest is over now, will there be another one coming up?
> I was quite disappointed by the limited participation of Cameroon in the photo contest. Given the effort already made in that country to train editors and to promote the project, I expected more input.
While trying to upload some pictures to Commons, I'm
starting to understand at least one part of the issue. Internet
connections are really bad. Very high ping times, both at the French
institute as well as in the hotel I'm staying in now, which I can't
easily consider cheap (at least in terms of pricing).
Is there a robust way to upload pictures to Commons over bad internet
connections?
Facebook and G+ Android apps have done a fairly good job at uploading
pictures automatically, but now I still need to first download them to
my laptop and then upload them through the Upload Wizard, which is
failing me. Using the Commons Android app is not a good alternative
from the hotel connection because it wants me to enter a user/pass
combination too often on my phone.
Still fighting this awful head/chest cold, and I didn't get any rest last night, so I just feel terrible. Going to take the day to just sleep.
Reminder: I'll also be off Friday, as Candace and I have a lengthy doctor's appointment scheduled.
On Wed, Nov 26, 2014 at 8:01 PM, Joaquin Oltra Hernandez
> phantomjs cucumber tests are ~20% faster than firefox (100% less annoying)
> phantomjs cucumber tests have more failing tests than firefox
> cucumber tests have tags.
> we should tag important/fast tests, and run them more often in dev
> I've been experimenting with the browser tests to see if it would make any
> difference running them headlessly or w/ phantomjs in speed of execution.
> First, in OSX it is not supported running the cucumber tests in headless
> mode (env var HEADLESS=true with any browser), so I haven't been able to
> time that.
I'm not sure if HEADLESS=true is needed any more.
> When running the cucumber tests on my machine, my results have been 19m30s
> for Firefox, and 15m for phantomjs.
> It is not a huge improvement, but still, around 23% speed up, so it is worth
> it. Also, the browser window is not stealing focus from you while you work
> every time a new test launches, so I would say that is a major win, at least
> for me.
> I'll investigate a bit more since I get more failing tests in phantom js
> than with firefox (28 vs 13) and also why I'm getting all those failing
> tests even in firefox. If anybody has faced these issues, I'd love some help
> About executing an important subset, as we discussed, browser tests have
> tags both for features and scenarios, and you can execute only tests with a
> certain tag by doing bundle exec cucumber --tags @tagname for example. You
> can see an example of tagging a feature and a scenario in mainmenu.feature:
> @chrome @en.m.wikipedia.beta.wmflabs.org @firefox @test2.m.wikipedia.org
> Feature: Menus open correct page for anonymous users
> Scenario: Nearby link in menu
> I think it would benefit us a lot to tag both important and fast tests, so
> that we run them more often in development and catch more regressions
I agree. @fast and @important tags would be useful.
> To run browser tests in mobilefrontend with phantomjs installed just do
> BROWSER=phantomjs make cucumber
> More info:
I've cc'ed the qa and mobile-l public mailing lists as I think this is
a useful discussion and will gather more expertise :).