Speaking from a completely subjective point of view, I catch myself (at least a couple of times a week) clicking it trying to get to the main page (since that's what I'm used to doing with the globe in the top left corner). You'd think I would have learnt by now, but I'm guessing that since I'm out of the mobile view so often anyway (many special pages I view fail in it), I still haven't.

James Alexander
Legal and Community Advocacy
Wikimedia Foundation
(415) 839-6885 x6716 @jamesofur


On Sat, Nov 16, 2013 at 12:24 AM, Steven Walling <swalling@wikimedia.org> wrote:


On Saturday, November 16, 2013, Jared Zimmerman wrote:

The first step in any of this is getting some instrumentation; without it we're just guessing at how many users it affects. We should at least capture how many users expose the menu and how many click on the options within it.
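
A minimal sketch of what that could look like (the selectors, event names, and beacon endpoint below are placeholders I made up, not an existing MediaWiki API):

// Sketch: count how many readers expose the menu and how many click an
// option inside it. Selectors and endpoint are hypothetical.
function logEvent(name: string, data: Record<string, unknown> = {}): void {
  navigator.sendBeacon('/beacon/menu-events', JSON.stringify({ name, ...data }));
}

// Fires each time the left menu is exposed.
document.querySelector('#mw-mf-main-menu-button')?.addEventListener('click', () => {
  logEvent('menu-expanded');
});

// Fires when any option inside the menu is clicked.
document.querySelectorAll('#mw-mf-page-left a').forEach((link) => {
  link.addEventListener('click', () => {
    logEvent('menu-item-click', { href: (link as HTMLAnchorElement).href });
  });
});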

Good point. If we can just know what % of viewers (logged in and not) click through now, that's a great start. 
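
If both counts are aggregated per unique reader, that baseline is just a ratio (trivial sketch, invented numbers):

// Sketch: baseline click-through rate from the two aggregated counts.
function clickThroughRate(uniqueClickers: number, uniqueViewers: number): number {
  return uniqueViewers === 0 ? 0 : uniqueClickers / uniqueViewers;
}

// e.g. 12,000 clickers out of 400,000 viewers -> 0.03 (3%)
console.log(clickThroughRate(12_000, 400_000));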
 
 

Jared Zimmerman  \\  Director of User Experience \\ Wikimedia Foundation               


On Sat, Nov 16, 2013 at 3:24 AM, Steven Walling <swalling@wikimedia.org> wrote:

On Fri, Nov 15, 2013 at 1:24 PM, Jon Robson <jrobson@wikimedia.org> wrote:
If the A/B test is limited to anonymous users on all pages, then I would expect us to still be able to deduce whether minor changes to the UI encourage clicking (if 30% of the audience has never clicked the icon, a 50/50 split would still put 15% of them in each bucket, so we would still see differences in click-through rate in an A/B test).

You can observe an increase or decrease, but the point is that it's meaningless data, because there is no way to determine what caused it with any certainty. This means you can run a test and collect data, but you can't answer a question like "Does this version make it easier to find the menu, compared to the old version?"

Compare this to tests mobile has run on newly-registered users. While you can't guarantee that they've all never made an account before, we know through careful analysis that it's very likely that the vast majority of new registrations are in fact new people. So when we do a random 50/50 split of new registrations, we're comparing the behavior of two similar populations of users, neither of which has ever been exposed to either treatment before.
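
For comparison, the kind of assignment that keeps those two populations comparable looks roughly like this (a sketch; the FNV-1a hash is illustrative, not the bucketing code we actually run):

// Sketch: deterministic 50/50 bucketing of new registrations by user ID,
// so each user always sees the same treatment on every visit.
function bucketFor(userId: string): 'control' | 'treatment' {
  let hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < userId.length; i++) {
    hash ^= userId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime
  }
  return hash % 2 === 0 ? 'control' : 'treatment';
}

// Stable per user; across many users the split converges on 50/50.
console.log(bucketFor('new-user-1234'));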

With a random set of readers, you're getting a huge selection of users who might be new, and also many users who have seen some permutation of the site before. With a test like this, there's no way to ensure that a result isn't just due to effects like random exploratory clicking, because you introduced something new to people who are already accustomed to the old interface.


--
Steven Walling
Product Manager


_______________________________________________
Mobile-l mailing list
Mobile-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mobile-l