Hey folks,
I'm glad the presentation came across so well. I really appreciate the
discussion.
Pine, I really appreciate those plots that you linked. It seems that you
can identify the progression through barrier types by following the
hexagonal graphs clockwise. Concerns start with complex rules and (to a
lesser extent) the difficulty of editing, and progress to concerns about
negative social behavior and access to reference materials.
Regarding editathons, I'm not quite sure the right way to measure their
effects. I suspect that one of the biggest effects of editathons is the
result of discussions that people have with their friends and family after
the event. "I edited Wikipedia and it was fun. It turns out that there are
a lot of different ways to contribute. You don't have to be an
expert." -- is a conversation I imagine is relatively common after an
editathon. The awareness (I can edit Wikipedia?!), new registrations and
contributions that result from such once-removed discussions would be
nearly impossible to track.
Jane, see more of my work exploring the rising social/motivational
barriers here:
In the
conclusion of that talk, I bring up Snuggle[1] as an example of a
technological strategy for supporting desirable social behaviors. My
recent work on Revision Scoring[2] was originally inspired by my work
to extend Snuggle beyond English Wikipedia -- I needed vandalism prediction
scores for wikis other than English Wikipedia! Generally, I think we (as
Wikipedian community members) have much deeper insight into the types of behaviors
(e.g. reactions to newcomer contributions) that are desirable than we had
in 2006 and that, if we were to redesign counter-vandalism tools from
scratch with these insights in mind, we'd be able to dramatically reduce
this type of social/motivational barrier. I think Snuggle is a good
example of such a new type of tool and the idea with Revision Scoring is
that I'd like to make it *really easy* for others to experiment with their
own strategies. The next thing I want to do is to try empowering
WikiProjects with automated quality control/socialization tools. I suspect
that WikiProject members will be highly motivated to socialize promising
newcomers and help them work productively within the topical context
of their WikiProject -- if they had the means to do so efficiently.
1.
2.
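To make the "experiment with their own strategies" idea concrete, here is a minimal sketch of the kind of routing logic a tool built on revision scores might use. The function name, categories, and thresholds are all hypothetical, not part of the Revision Scoring service itself:

```python
def triage_edit(damaging_probability, revert_threshold=0.9, review_threshold=0.5):
    """Route an incoming edit based on a vandalism-prediction score.

    Thresholds are illustrative, not tuned values from any real deployment.
    """
    if damaging_probability >= revert_threshold:
        # Very likely damaging: surface it to counter-vandalism patrollers.
        return "queue-for-patroller"
    if damaging_probability >= review_threshold:
        # Uncertain: a WikiProject member reviews it and welcomes the newcomer.
        return "queue-for-mentor"
    # Probably good-faith: no intervention needed.
    return "leave-alone"

# A borderline edit gets routed to a human mentor rather than reverted outright.
print(triage_edit(0.6))
```

The point of the middle tier is exactly the WikiProject scenario above: scores don't have to drive reverts; they can drive socialization.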
-Aaron
On Mon, Aug 3, 2015 at 7:36 AM, Jane Darnell <jane023(a)gmail.com> wrote:
OK I am replying to this mail, as this one has the link to YouTube in it
with the two presentations. I am only responding to the first presentation
by Aaron here.
In general I like the idea of focussing attention on the "New Editor
Activation Funnel". This area is of course the reason why we have a decline
in new editors, and it all has to do with an increase in "barriers to
entry" (which btw I am not convinced is the same thing as "technical
impediments"). It is useful to split these barriers up into Permission,
Literacy (here wikimarkup is lumped together with policies), and
Social/Motivational (human interaction) issues, but I think the whole
presentation misses the point on the need for more dissection of the
reverts problem (shown a bit towards the end).
I personally think that demotivational behavior by experienced Wikipedians
is the biggest factor in the decline of new editor contributions, but
unlike most people I don't think this has to do with what the experienced
Wikipedians do, but rather what they don't do. They don't welcome people in
person (because they don't see their edits) and they don't give timely
feedback on first edits to pages on their watchlist (no way to see if those
edits are first time edits). They don't show them the ropes: if you want
to make a BLP, or an article about a company or building or place, or
an article about an artwork, you should look at existing examples and start
from there. Having said this, I do think we spend an inordinate amount of
time on things like extending the page about WHAT WIKIPEDIA IS NOT (which
btw I have yet to read). It seems that our best way of dealing with
newcomers is to throw CAPS at them, though we all hate CAPS.
The point of this study was to test these two hypotheses: H1: VE will
increase the amount of desirable edits by newbies, and H2: VE will increase
the amount of undesirable edits by newbies (aka VANDALISM). Guess what?
Both H1 & H2 show
no significance and if anything, less vandalism came from VE editors. I
could have told you that beforehand - yawn. It angers me when people assume
that others are not technical enough for Wikipedia. Sorry, but it is not
rocket science.
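For readers wondering what "no significance" means concretely here, a standard way to compare vandalism rates between two cohorts (VE vs. wikitext) is a two-proportion z-test. The counts below are made up for illustration; they are not figures from the study:

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for H0: both cohorts have the same underlying rate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 40 of 1000 VE edits reverted as vandalism,
# vs. 50 of 1000 wikitext edits.
z = two_proportion_z(40, 1000, 50, 1000)
print(abs(z) < 1.96)  # True: not significant at the 5% level
```

With a |z| below 1.96 the difference could easily be noise, which matches the "if anything, less vandalism came from VE editors" reading.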
This type of thinking is not just on Wikipedia, I see this also in health
occupations, where doctors tell their patients not to go look things up on
the Internet. Just trust the doctors because they studied it! Yeah right,
like I am going to trust all aspects of my future health and well-being to
someone who sees my future health and well-being as a 10-minute interlude
in their 9-5 workday. No, I will nod politely (one must always remain
friendly) while googling my way to better health, thanks. And if I want to
make an article about something that I think needs an article on Wikipedia,
I am going to try to do it on my own as far as I can get, and I am probably
not interested in talking about it until I am done. The whole AfC queue
thing is absolutely horrible because it puts these edits on ice until the
person totally forgets the password they dreamed up for their
user account. As far as spelling corrections go, if I correct an error and
see it deleted (like from Kiev to Kyiv, which will be reverted by a bot
probably), then I will probably not come back.
I am very eager to hear more about the revision scoring though! I wish
there were a better way to do that than manually, however.
Jane
On Wed, Jul 29, 2015 at 8:07 PM, Leila Zia <leila(a)wikimedia.org> wrote:
A friendly reminder that this is happening in 23
min. :-)
YouTube stream:
https://www.youtube.com/watch?v=vGyrVg_qKSM
IRC: #wikimedia-research
Best,
Leila
On Mon, Jul 27, 2015 at 2:47 PM, Leila Zia <leila(a)wikimedia.org> wrote:
Hi everyone,
The next Research showcase will be live-streamed this Wednesday, July 29
at 11.30 PT. The streaming link will be posted on the lists a few minutes
before the showcase starts (sorry, we haven't been able to solve this, yet.
:-() and as usual, you can join the conversation on IRC at
#wikimedia-research.
We look forward to seeing you!
Leila
This month:
*VisualEditor's effect on newly registered users*
By *Aaron Halfaker*
<https://www.mediawiki.org/wiki/User:Halfak_%28WMF%29>
It's been nearly two years since we ran an initial study
<https://meta.wikimedia.org/wiki/Research:VisualEditor%27s_effect_on_newly_registered_editors/June_2013_study>
of VisualEditor's effect on newly registered editors. While most of the
results of this study were positive (e.g. workload on Wikipedians did
not increase), we still saw a significant decrease in newcomer
productivity. In the meantime, the Editing
<https://www.mediawiki.org/wiki/Editing> team has made substantial
improvements to performance and functionality. In this presentation, I'll
report on the results of a new experiment designed to test the effects of
enabling this improved VisualEditor software for newly registered users
by default. I'll show what we learned from the experiment and discuss how
some results have opened larger questions about what, exactly, is difficult
about being a newcomer to English Wikipedia.
*Wikipedia knowledge graph with DeepDive*
By *Juhana Kangaspunta* and
*Thomas Palomares (10-week student project)*
Despite the tremendous amount of information present on Wikipedia, only
a very small amount is structured. Most of the information is embedded in
text, and extracting it is a non-trivial challenge. In this project, we try
to populate Wikidata, the structured companion to Wikipedia, using the
DeepDive tool to extract relations embedded in the text. We finally
extracted more than 140,000 relations with more than 90% average precision.
We will present DeepDive and the data we use for this project, explain
the relations we have focused on so far, and describe the implementation
and pipeline, including our model, features, and extractors. Finally, we
detail our results with a thorough precision and recall analysis.
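For reference, precision and recall over extracted relations are computed from the extraction output and a labeled gold set roughly as follows. This is a generic sketch with made-up triples, not the project's actual evaluation code:

```python
def precision_recall(extracted, gold):
    """Precision and recall of an extracted relation set against gold labels.

    Both arguments are sets of (subject, relation, object) triples.
    """
    true_positives = len(extracted & gold)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Toy example: 2 of 3 extracted triples appear in the gold set.
extracted = {("A", "spouse", "B"), ("C", "spouse", "D"), ("E", "spouse", "F")}
gold = {("A", "spouse", "B"), ("C", "spouse", "D"), ("G", "spouse", "H")}
p, r = precision_recall(extracted, gold)
```

So "more than 90% average precision" means that over 90% of the extracted triples were judged correct against such labels.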
_______________________________________________
Wiki-research-l mailing list
Wiki-research-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l