I'd go further and say it's an extension of the bullying that my grandmother
would have called "little hitlers" (officiousness), or, perhaps more
scientifically, the kind of behaviour observed in the Stanford prison
experiment:
https://en.wikipedia.org/wiki/Stanford_prison_experiment
On Wikipedia, we have created a situation where any editor can be judge,
jury and executioner over other people's edits. We provide the anonymity of
pseudonyms too, just to make it a little easier. We have a civility rule
that evidently sets the bar so low that an editor can call a female editor a
"cunt" and have it written off as "oh, that's just SoAndSo" (on the grounds
that this particular contributor uses that kind of language all the time) --
read the gender gap mailing list if you want more charming examples of that
sort of thing.
This recent article in The Guardian might be a bit over-the-top:
http://www.theguardian.com/science/brain-flapping/2014/sep/12/comment-sections-toxic-moderation
but it does make the point pretty well. For everyone who tries to make a
contribution, we have an army of folks with apparently nothing better to do
than pass negative judgement (without making any positive contribution of
their own).
For myself, I'd restrict the power to revert edits to a small set of
circumstances, e.g. vandalism, patent nonsense, and uncited derogatory
claims about living people, and enforce this with a tick-box indicating the
grounds on which you are reverting. Other concerns could be "reported" but
not reverted (done like "thank", with a text box to raise the concern). I
would introduce some new roles/rights, called something like "curator", in
relation to different topic areas (probably linked to a category, or maybe a
Project -- noting that many Projects are moribund), assigned to those judged
by their peers (as in Requests for Adminship) to have a track record of
positive edits and collaborative behaviour (which would include
communication) within that topic area. I'd give these folks the power to
revert on a wider set of criteria within their defined topic area, and the
obligation to respond to "reports" within it. Obviously the intention is
that these people would be better placed to decide whether something needed
a quick revert, a discussion on the talk page, some advice to a new
contributor, etc. This kind of mechanism is used to manage a lot of online
forums and I see no reason why it should not work on WP.
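To make that concrete, here's a rough sketch in Python of what the
permission check might look like. Purely illustrative: the names
(RevertGround, user.curated_topics, etc.) are invented for the example and
bear no relation to actual MediaWiki code.

    from enum import Enum

    class RevertGround(Enum):
        # grounds available to everyone
        VANDALISM = "vandalism"
        PATENT_NONSENSE = "patent nonsense"
        UNCITED_BLP_DEROGATORY = "uncited derogatory claim about a living person"
        # wider grounds, available only to curators in their topic area
        UNSOURCED = "unsourced content"
        OFF_TOPIC = "off topic for this article"

    ORDINARY_GROUNDS = {
        RevertGround.VANDALISM,
        RevertGround.PATENT_NONSENSE,
        RevertGround.UNCITED_BLP_DEROGATORY,
    }

    def may_revert(user, ground, topic_area):
        """True if this user may revert on this ground in this topic area."""
        if ground in ORDINARY_GROUNDS:
            return True  # anyone can revert for these reasons
        # anything else needs curator rights for the topic area in question
        return topic_area in user.curated_topics

Anything failing the check would have to go through the "report" mechanism
instead of being reverted.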
Alternatively, we could introduce a "no thanks" button which silently
records that a logged-in user isn't happy with an edit. We could use this
data to look for patterns of "no thanks" involving the same editor, either
as the one who made the edit or as the "no thanker", with a view to
identifying "problem people" and trying to do some remedial work around
their behaviour/expectations. It might also reveal early signs of conflict
in articles or categories involving multiple editors, which might benefit
from early intervention. Generally, the longer arguments go on, the more
entrenched the participants become in their viewpoints (it becomes more
about "winning" than addressing the original issue). Because the purpose of
the tool is "macroscopic" rather than "microscopic", I actually don't think
the data should be public, but rather accessible to those who would be
taking action on it (whatever role that might be). I know some people will
not like the "lack of transparency", but I think making it public does not
help, as victims and bystanders are often afraid to report bullying for fear
of attracting retaliation. Also, you don't want single reports to become an
argument in themselves -- it's a macroscopic tool, not a microscopic one.
I'd even go further and say that nobody with rights to see the data should
be able to see the data about themselves.
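If it helps, here's the sort of aggregation I have in mind -- again just a
sketch; the event fields and the fixed threshold are made up, and in
practice you'd want something smarter than a simple cut-off:

    from collections import Counter, namedtuple

    # one record per press of the hypothetical "no thanks" button
    NoThanks = namedtuple("NoThanks", ["editor", "no_thanker", "page"])

    def flag_patterns(events, threshold=20):
        """Surface editors and pages with unusually many "no thanks"."""
        events = list(events)
        by_editor = Counter(e.editor for e in events)       # often on the receiving end
        by_thanker = Counter(e.no_thanker for e in events)  # serial "no thanker"
        by_page = Counter(e.page for e in events)           # conflict brewing on a page
        return {
            "editors": [u for u, n in by_editor.items() if n >= threshold],
            "no_thankers": [u for u, n in by_thanker.items() if n >= threshold],
            "pages": [p for p, n in by_page.items() if n >= threshold],
        }

The point being that no individual event is ever acted on (or even shown);
only the aggregates are.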
But this is a research list. What's the research we could do to contribute
to solving editor attrition? For myself, it's looking at patterns of edits
(and reactions to those edits by others) in the days/weeks before a
long-term active editor becomes inactive. Is there any difference from the
normal pattern of their edits and edit reactions? By reaction, I mean
reverts or other "non-survival" of edits on a mainspace page, or responses
on a talk/user talk page. If it's conflict that drives people away, you'd
expect to see edits not surviving, or an increased level of talk (with
negative sentiment, if we can do sentiment analysis on the edit summaries or
the talk messages).
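The comparison itself is simple once you have per-edit outcomes; it's the
extraction (deciding that an edit "did not survive") that is the hard part.
A sketch, assuming we already have a history where each edit carries a
reverted flag and a timestamp (both assumptions, not existing data fields):

    def revert_rate(edits):
        """Fraction of edits that did not survive."""
        edits = list(edits)
        if not edits:
            return 0.0
        return sum(1 for e in edits if e.reverted) / len(edits)

    def attrition_signal(history, last_edit_time, window_days=30):
        """Revert rate in the final window minus the editor's long-run
        baseline; a large positive jump suggests conflict before leaving."""
        # assumes e.timestamp is a datetime, so subtraction gives a timedelta
        recent = [e for e in history
                  if (last_edit_time - e.timestamp).days <= window_days]
        baseline = [e for e in history
                    if (last_edit_time - e.timestamp).days > window_days]
        return revert_rate(recent) - revert_rate(baseline)

Sentiment on edit summaries or talk messages could be added as a second
signal in exactly the same shape.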
For newer editors, I think we just look at their reverts. Being new, we
don't have a history of "normal situation" to compare against. Where are
they editing? Why are they being reverted? Obviously we'd like to unpack why
good-faith edits are being reverted and see what might be done about it.
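For instance (purely illustrative -- the keyword lists are guesses, not an
established taxonomy, and real revert summaries are messier than this), one
could bucket the reverts of brand-new editors by the words in the reverting
edit summary:

    # map a revert reason to keywords we might find in the edit summary
    REASONS = {
        "vandalism": ("vandal", "rvv"),
        "unsourced": ("unsourced", "no source", "citation needed"),
        "test edit": ("test",),
    }

    def classify_revert(summary):
        """Very rough bucketing of a reverting edit's summary text."""
        s = summary.lower()
        for reason, keywords in REASONS.items():
            if any(k in s for k in keywords):
                return reason
        return "other/unexplained"

The "other/unexplained" bucket is the interesting one: those are exactly the
reverts where the newcomer gets no usable feedback.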
As an aside, my personal belief is that new editors don't know about Talk
and User Talk, and it's our consequent inability to communicate with them
that makes them walk away when their edits disappear (without explanation,
as they will see it). I think if you could talk with them, you could
probably help them achieve what they were trying to do, or nicely explain
why it can't be done. Our insistence on allowing anonymous editing and
sign-up without an email address works against being able to help newcomers.
Kerry
-----Original Message-----
From: wiki-research-l-bounces@lists.wikimedia.org
[mailto:wiki-research-l-bounces@lists.wikimedia.org] On Behalf Of Stuart A. Yeates
Sent: Sunday, 14 September 2014 7:15 PM
To: Research into Wikimedia content and communities
Subject: Re: [Wiki-research-l] What works for increasing editor engagement?
My personal hypothesis is that much wikipedia incivility is part of
the broader internet-troll phenomenon (google "Don't Read The
Comments" if you're unfamiliar with the effects of trolling). I'd be
very interested to see a linguistic comparison between classes of
edits/comments tagged as 'bad' across a range of sites which allow
unmoderated comments.
Being able to confirm that a large part of the problem was actually
part of an internet-wide problem rather than a local problem would be
a big step forward.
It worries me that the WMF may, by making the wikipedia interface more
similar to other discussion systems, reduce the differences between us
and the troll-infested platforms and make it psychologically easier
for those who troll on other platforms to troll on wikipedia.
cheers
stuart
_______________________________________________
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l