It’s always been possible to read or edit from a mobile phone or tablet using the desktop interface, and I agree that the development of more mobile-friendly tools alters things. Still, I would argue that having some “measurements” (if KPI is too biased a term) is useful for judging whether the effort put into those tools is worth the return. Does adding features alter the way Wikipedia is edited, and is it for better or worse? For example, when the new mobile tool was released a little while ago, I saw a lot of edits tagged “Mobile Edits” that were vandalising Wikipedia. Fortunately that died down, but obviously if most of the edits coming from mobile tools were vandalism, we might well ask whether they are worth having.

I often check my watchlist on my iPad and “knock off” the easy ones: “ok, ok, ugh vandalism revert, ok, ok, leave that one until I’m on my laptop, ok, ok, ok”. So it may be that people just reorganise their editing around the device they are using, in which case it might change what they do from hour to hour but not over (say) a week.

I think we have a range of metrics to pick from in terms of quantity of activity (how many, how large, etc.). What we probably need to complement them are some kinds of quality metrics. We could start with some easy ones: how often an edit summary is left and how long it is, how often the activity is a revert or a reverted edit, and whether the edits are to mainspace, talk, user, user talk, etc. But I think we also want to consider things like the macroscopic “quality assessment” issues.
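
To make the easy ones concrete, here is a rough, untested Python sketch against the standard MediaWiki Action API (list=recentchanges). The revert detection simply looks for tag names containing “revert” (e.g. mw-reverted on English Wikipedia), which is an assumption about how any given wiki tags such edits:

import requests

API = "https://en.wikipedia.org/w/api.php"

# Sample a batch of recent mainspace edits with their summaries and tags.
params = {
    "action": "query",
    "list": "recentchanges",
    "rcnamespace": 0,          # mainspace only; vary for talk, user, etc.
    "rctype": "edit",
    "rcprop": "comment|tags|sizes",
    "rclimit": 500,
    "format": "json",
}
changes = requests.get(API, params=params).json()["query"]["recentchanges"]

with_summary = [rc for rc in changes if rc.get("comment", "").strip()]
reverts = [rc for rc in changes
           if any("revert" in t for t in rc.get("tags", []))]

print("edits sampled:", len(changes))
print("share with an edit summary:", len(with_summary) / len(changes))
print("mean summary length:",
      sum(len(rc["comment"]) for rc in with_summary) / max(len(with_summary), 1))
print("share tagged as reverts:", len(reverts) / len(changes))

The same query could then be split by the mobile edit change tags to compare the desktop and mobile populations on each measure.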

Kerry

From: wiki-research-l-bounces@lists.wikimedia.org [mailto:wiki-research-l-bounces@lists.wikimedia.org] On Behalf Of Gerard Meijssen
Sent: Sunday, 14 September 2014 3:23 AM
To: Research into Wikimedia content and communities
Subject: Re: [Wiki-research-l] What works for increasing editor engagement?

Hoi,

The problem with this approach is that, as it stands, the functionality for editing on tablets and phones is not well developed at all. As a consequence, the results will not be that meaningful.

It is only recently that it became possible to edit from these devices at all. So realistically there are several important factors: the development of enabling technology, and the number of readers on a tablet or mobile.

The personal argument of current editors that they prefer their computer for complex stuff essentially makes the newbies on that other platform second-class citizens. The realisation that our technology currently favours computer usage does not. The first is an argument that sounds like "do not bother, it does not matter"; the other leaves room for "we need to work on improving the mobile/tablet experience".

Arguably, calling something a KPI builds in a bias from the start.

Thanks,

      GerardM

On 13 September 2014 14:41, Kerry Raymond <kerry.raymond@gmail.com> wrote:

It would be very interesting to know the size of edits done on mobile vs desktop (it would be even better if we could distinguish between phones and tablets because of the different form factors). I appreciate that we have a problem of definition, as a person on a phone can use the desktop interface and vice versa, so there's potentially a matrix of device and interface.

When I say “size of edit”, I would really prefer to know the size of the delta, not the difference in the size of the article as reported in the history. My personal hypothesis is that the smaller the form factor, the smaller the edits. As much as I love my iPad, it is no substitute for my laptop for serious editing; most edits are harder and slower on the iPad than on my laptop, and it's a pain to do citations on a mobile device. If my hypothesis is correct, I am not personally convinced that the loss of a desktop edit is compensated by the gain of a mobile edit. Even if it results in the same total number of edits, I think the extent to which an article is improved will be lower on mobile (on average). I'm not sure how we measure that, but KPIs like the size of the delta and the addition of citations might be interesting. Or, with enough data, we could use the automatic assessment tool to look for articles that change assessment (as measured by the tool) and compare the mobile vs desktop edit counts and ratios, etc.
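
As a rough illustration of “size of the delta” (insertions plus deletions, rather than the net change in article length), here is an untested Python sketch that pulls the two most recent revisions of a page through the MediaWiki Action API and measures the delta with difflib. The page title is only an example, and treating the “mobile edit” change tag as the marker for mobile editing is an assumption:

import difflib
import requests

API = "https://en.wikipedia.org/w/api.php"

def latest_revisions(title, n=2):
    # Fetch the wikitext and tags of the n most recent revisions, newest first.
    params = {
        "action": "query",
        "titles": title,
        "prop": "revisions",
        "rvprop": "content|tags",
        "rvslots": "main",
        "rvlimit": n,
        "format": "json",
    }
    pages = requests.get(API, params=params).json()["query"]["pages"]
    return next(iter(pages.values()))["revisions"]

new, old = latest_revisions("Brisbane")  # example title only
new_text = new["slots"]["main"]["*"]
old_text = old["slots"]["main"]["*"]

# Delta = characters inserted plus characters deleted, not the net size change.
matcher = difflib.SequenceMatcher(None, old_text, new_text)
delta = sum((i2 - i1) + (j2 - j1)
            for op, i1, i2, j1, j2 in matcher.get_opcodes()
            if op != "equal")

print("delta size:", delta)
print("net size change:", len(new_text) - len(old_text))
print("made on mobile:", "mobile edit" in new.get("tags", []))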

Sent from my iPad


On 11 Sep 2014, at 8:20 pm, Gerard Meijssen <gerard.meijssen@gmail.com> wrote:

Hoi,

The point of research is that it provides us with understanding that indicates, one way or the other, the problems we face and how we are trending towards success or failure.

Thanks to numbers we know the extent of the growth of our mobile readers and editors. The trend is uncontroversial: it grows, and it offsets the decline in readers and editors on computers. Simple research shows that talk pages are unworkable on mobiles and tablets.

Dear Pine, do you agree that such research exists, and do you agree that I fairly summarize the data that is available?

When you want more engagement from our public, ask yourself how we can use our numbers and analyse what might point to things where we could or should mobilise our community: numbers that show clearly why it makes sense for us to ask volunteers to volunteer. I will give you one set of numbers we do not have... the number of negative results from the searches in each of our Wikipedias individually.

Thanks,

     GerardM

On 11 September 2014 08:00, Pine W <wiki.pine@gmail.com> wrote:

Hello research colleagues,

When I look at the WMF Report Card, it appears to me that the global active editor stats and the number of new accounts being registered per month have been relatively flat since at least 2011.

Those of you who work in EE research and analytics: I would like to ask if there is a summary of techniques that you have found to produce statistically significant results in improving editor retention. I know that some of you write tools, design projects, or pull and analyze data about editors. It looks to me like WMF is investing significant effort in research and tool creation, but we're not moving the needle to create the results that we had hoped to achieve. So I'd like to ask what we have learned from all of our time working on editor engagement about techniques and programs that do improve the EE stats in significant ways, so that we can hopefully accelerate the implementation of programs and techniques that have demonstrated success.

I'd also like to ask what barriers you think prevent us from becoming more effective at improving the number of users who register and the number of active editors. For example, are users who go through GettingStarted often deterred by being quickly confronted by experienced editors in ways that make the newbies want to leave? If that is a significant problem, how do you suggest addressing it?

One of my concerns about investing further in developing Flow, analytics tools like Wikimetrics, and further complex editor engagement research projects is that the most important challenges related to editor engagement may be problems that can only be solved through primarily interpersonal and social means rather than the use of software tools and mass communications. I like Wikimetrics and I use it, and I think there's an important place for analytics and tool development in EE work, but I wonder if WMF should scale up the emphasis on grassroots social and interpersonal efforts, particularly in the context of the 2015+ Strategic Plan and Jimmy's speech at the 2014 Wikimania. What do you think, and if your answer is yes, how do you think WMF can do this while respecting the autonomy and social processes of the volunteer projects?

Thanks,

Pine




_______________________________________________
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l