Data pertinent to our discussion include the numbers of occurrences of sockpuppetry, bots,
trolls, spam, vandalism, fake news, disinformation and election interference on Wikipedia.
Are these not important problems to be solved?
Regarding online harassment, which I opine is also an important problem to be solved,
there are a number of categories of scenarios to consider: whether the harassing
party is anonymous, pseudonymous or using their real name, and whether the harassed party
is anonymous, pseudonymous or using their real name. There appear to be a total of nine
categories of scenarios to consider, and that combinatorial variety could be part of the
complexity of discussing the topic of mitigating online harassment. I can put some thought
into innovations for Wikipedia in this regard.
You might be interested in: Jang, M., Foley, J., Dori-Hacohen, S., & Allan, J. (2016).
Probabilistic Approaches to Controversy Detection. In Proceedings of the 25th ACM
International on Conference on Information and Knowledge Management (pp. 2069–2072). New
York, NY, USA: ACM. https://doi.org/10.1145/2983323.2983911
As we can consider the
detection of controversy, we might also be able to algorithmically detect occurrences of
harassment in Wikipedia discussion areas.
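As a crude illustration of how such detection might begin, consider the following sketch. The lexicon, scoring rule, and threshold are toy placeholders of my own, not anything from the cited paper, which uses trained probabilistic models:

```python
# Toy sketch: lexicon-based scoring of talk-page comments for possible
# harassment. The lexicon, scoring rule, and threshold are illustrative
# placeholders, not a trained model.

HARASSMENT_LEXICON = {"idiot", "stupid", "shut up", "get lost"}  # hypothetical

def harassment_score(comment: str) -> float:
    """Return the fraction of lexicon phrases that occur in the comment."""
    text = comment.lower()
    hits = sum(1 for phrase in HARASSMENT_LEXICON if phrase in text)
    return hits / len(HARASSMENT_LEXICON)

def flag_for_review(comment: str, threshold: float = 0.25) -> bool:
    """Queue a comment for human review when its score crosses the threshold."""
    return harassment_score(comment) >= threshold
```

A production system would instead train a supervised classifier on labeled discussion data, analogous to the controversy-detection models in the cited paper, and would route flagged comments to human reviewers rather than acting automatically.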
Regarding whether real names promote safer or less safe communities, I observe from your
hyperlink that those voicing concerns with respect to Facebook and real names, the
Nameless Coalition, include: Access, ACLU, Center for Democracy and Technology, Digital
Rights Foundation, Electronic Frontier Foundation, #forabetterFB, Global Voices Advocacy,
Human Rights Watch, Internet Democracy Project, One World Platform, Point of View and Take
Back the Tech.
Providing options for users to display a real name or pseudonym, including after optional
account verification processes, seems to address the specific concerns raised by the
Nameless Coalition.
From: Wikitech-l <wikitech-l-bounces(a)lists.wikimedia.org> on behalf of David Barratt
Sent: Monday, May 7, 2018 9:49:35 AM
To: Wikimedia developers
Subject: Re: [Wikitech-l] MediaWiki and OpenID Connect
"As aforementioned, system administrative configuration options could
include allowing verified users to choose whether to display their real
names or to display their usernames."
I think you might be missing the point: we don't even want to collect that
data in the first place, let alone allow non-marginalized users to display
their real names (thereby identifying the marginalized users who opted out).
It is clear that using your real name carries real-world harm,
and there is a coalition to stop Facebook from enforcing this policy.
In fact, Wikipedia itself recommends against using your real name. I truly regret
using my real name in my personal user account: just from my account you can
very easily figure out the region where I live. Next, put my last name into
a public record search for my region and you will have my
physical address and a nice picture of where I live.
Thankfully I am not in a marginalized group, but I hope you see how easy
that would be. Even if the identity is not displayed, if it's stored in our
database, there is a potential for that data to be exposed.
I will probably take a moment at some point to change my username to a
pseudonym (when I come up with a good one), but username changes are
public, so it would be pretty easy to search for my new username and find
the record of the name change.
The Pew study you referenced does not suggest that those users were
anonymous, only that a plurality of the victims of
harassment "said a stranger was responsible for their most recent
incident." Also, correlation does not imply causation.
There are many things we could do to mitigate harassment on Wikipedia, but
a real name or account verification is not one of them. As an experiment, I
would recommend finding, and attempting to go through, all of the steps
required to report harassment. It's depressingly difficult. Another thing
we can do is implement granular blocks to block accounts from specific
pages, categories, or namespaces rather than a whole-site block.
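Conceptually, a granular block is a scoped record rather than a single site-wide flag. A rough sketch (the field names are illustrative, not MediaWiki's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One block record, scoped to particular pages and/or namespaces."""
    user: str
    pages: set = field(default_factory=set)       # page titles covered
    namespaces: set = field(default_factory=set)  # namespace names covered
    sitewide: bool = False                        # classic whole-site block

def may_edit(blocks, user: str, page: str, namespace: str) -> bool:
    """Allow the edit unless a matching block covers this page or namespace."""
    for b in blocks:
        if b.user != user:
            continue
        if b.sitewide or page in b.pages or namespace in b.namespaces:
            return False
    return True
```

The point of the design is that a user blocked from one disputed page can still contribute everywhere else, which is a far more proportionate response than a site-wide block.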
There are probably many more ideas that could be implemented, but again, I
don't know of any evidence that a real-name or account-verification policy
actually works, and there is a growing consensus that such policies cause more
harm than good.
On Sat, May 5, 2018 at 7:50 AM Adam Sobieski <adamsobieski(a)hotmail.com> wrote:
Thank you again for the concerns and comments responded to in the following.
Two distinct topics are discussed: (1) MediaWiki should provide its
software users with OpenID Connect functionality including to verify their
names and accounts via account linking, and (2) Wikipedia should make use
of such MediaWiki features.
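For topic (1), after a relying party verifies an ID token's signature against the provider's published keys, it must validate the token's standard claims before linking accounts. A minimal sketch of that claim check (the issuer and client identifiers are hypothetical, and signature verification is omitted):

```python
import time

def validate_id_token_claims(claims: dict, expected_issuer: str,
                             client_id: str, now=None) -> bool:
    """Check the standard OpenID Connect ID-token claims (iss, aud, exp, sub).

    Assumes the token's signature has already been verified against the
    provider's published JWKS keys; that step is omitted from this sketch.
    """
    now = time.time() if now is None else now
    if claims.get("iss") != expected_issuer:
        return False                      # issued by the wrong provider
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if client_id not in audiences:
        return False                      # issued to a different client
    if claims.get("exp", 0) <= now:
        return False                      # token has expired
    return "sub" in claims                # a stable subject identifier is
                                          # needed for account linking
```

On success, the provider's `sub` claim supplies the stable identifier that MediaWiki could associate with the local account.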
On topic one, there are a large number of MediaWiki software users
and use case scenarios. On topic two, “MediaWiki's most famous use has been
in Wikipedia”, thus it makes sense to carefully discuss MediaWiki
software and Wikipedia simultaneously.
Importantly, we can consider options for configuration for system
administrators (users of MediaWiki software) and their end users. For
example, a system administrative option could be whether to activate the
account verification features. Account verification could then be either
optional or required for their users. Where users are allowed to verify their
accounts, system administrative configuration can specify whether verified
users have a setting to choose between displaying their real names and
displaying their usernames.
Configuration options for MediaWiki system administrators and subsequently
contingent configuration options for end users can maximize utility for all
parties concerned across a large number of MediaWiki use case scenarios.
In this way, Wikipedia can choose whether and configurably how to utilize
MediaWiki account verification features in a manner that exactly aligns
with their policy.
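The layered options described above can be sketched as follows. These setting names are hypothetical illustrations of the idea, not actual MediaWiki configuration variables:

```python
from dataclasses import dataclass

@dataclass
class SiteConfig:
    """Hypothetical system-administrator switches."""
    verification_enabled: bool = False   # activate account verification?
    verification_required: bool = False  # optional vs. mandatory for users

@dataclass
class UserAccount:
    """Hypothetical per-user state and preference."""
    username: str
    real_name: str = ""
    verified: bool = False
    prefers_real_name: bool = False

def display_name(site: SiteConfig, user: UserAccount) -> str:
    """Show the real name only when every contingent option permits it."""
    if (site.verification_enabled and user.verified
            and user.prefers_real_name and user.real_name):
        return user.real_name
    return user.username
```

A wiki such as Wikipedia could simply leave verification disabled at the site level, and the downstream per-user options never come into play.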
“In 2012, Wikipedia launched one of its largest sockpuppet investigations,
when editors reported suspicious activity suggesting 250 accounts had been
used to engage in paid editing.” 
“On August 31, 2015, the English Wikipedia community discovered 381
sockpuppet accounts operating a secret paid editing ring.” 
PAGE PROTECTION OPTIONS AND ACCOUNT VERIFICATION
System administrators of MediaWiki could configure MediaWiki to allow
administrative users the capability to protect content with a mode such
that only verified accounts could edit that content.
“Why would this be desirable?”
Reasons include, but are not limited to: (1) per the rationale for
semi-protection and extended confirmed protection, (2) an
administrator determines that one or more unverified accounts involved
in an incident are sockpuppets, bots, trolls, spammers, vandals, etc., and (3)
detecting or preventing conflict-of-interest editing.
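Such a mode could sit alongside the existing protection levels in the edit-permission check, roughly as follows (the level names are illustrative, not MediaWiki's actual identifiers):

```python
PROTECTION_LEVELS = ("none", "semi", "verified-only")  # hypothetical names

def can_edit(protection: str, is_autoconfirmed: bool, is_verified: bool) -> bool:
    """Decide whether an account may edit a page at the given protection level."""
    if protection == "none":
        return True
    if protection == "semi":
        return is_autoconfirmed        # existing semi-protection rationale
    if protection == "verified-only":
        return is_verified             # the proposed new mode
    raise ValueError(f"unknown protection level: {protection!r}")
```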
TWITTER ACCOUNT VERIFICATION CONTROVERSY
“Given how much controversy and unhealthy dynamics surrounds verified
accounts on Twitter, I do not think it is a good idea to copy it.”
Some of the confusion around account verification on Twitter stems from
misunderstanding what the checkmark is and what it means. What does
account verification mean when controversial figures’ accounts are
verified? Some people, for example, misunderstood it as suggesting
endorsement by Twitter. “In July 2016, Twitter announced […] that
verification ‘does not imply an endorsement.’”
ACCOUNT VERIFICATION AND USER PAGES
“If there's an individual need to establish link between legal identity of
somebody and their Wiki credentials, there are easy ways to do it – e.g.
publish a signed message both on wiki user page under the account and on
the resource known to be controlled by the person, etc.”
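That manual scheme could be checked mechanically: a verifier confirms that the same identity-binding statement appears both on the wiki user page and on the externally controlled resource. A sketch follows; the statement format is a hypothetical convention, and a real deployment would additionally verify a cryptographic signature (e.g. PGP), since plain text can be copied by anyone:

```python
def binding_statement(wiki_user: str, external_url: str) -> str:
    """The statement the person publishes in both places (hypothetical format)."""
    return f"I am Wikipedia user {wiki_user} and I control {external_url}"

def identities_linked(user_page_text: str, external_page_text: str,
                      wiki_user: str, external_url: str) -> bool:
    """True when the identical binding statement appears in both texts."""
    stmt = binding_statement(wiki_user, external_url)
    return stmt in user_page_text and stmt in external_page_text
```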
System administrators could also provide users an option to have their linked
external accounts hyperlinked from their user pages. Collaboration on
Wikipedia articles can be an opportunity to socialize and connect on other
websites.
REAL NAMES AND SAFETY
“Everywhere online, exposing your real life identity means a possibility
of real life problems: stalking, harassment, attempts to get someone you
have a content dispute with fired. And I'm not even theorizing: all the
above things have happened on Wikipedia, multiple times.”
“Requiring people to take that risk to edit certain pages is not really a […]”
As aforementioned, system administrative configuration options could
include allowing verified users to choose whether to display their real
names or to display their usernames.
Proponents of real-name policies include Mark Zuckerberg. “Facebook’s CEO
and Founder Mark Zuckerberg defended [Facebook’s real-name policy], saying,
‘We know people are much less likely to try to act abusively towards other
members of our community when they’re using their real names.’ A Pew
Research study from 2014 supports Zuckerberg’s claim, proving that ‘half of
those who have experienced online harassment did not know the person
involved in their most recent incident.’” 
SOCIAL MEDIA EXECUTIVES
“Social networks want to have people's confirmed identities so that they
could sell them to the highest bidder.”
Social networks are in a crisis, as evidenced by recent testimony before
Congress. Their policies with regard to account verification are genuine
attempts to solve problems, some described to them by Congress, not gambits
to acquire more user data. Account verification mitigates sockpuppetry,
bots, trolls, spam, vandalism, conflicts of interest, fake news,
disinformation and election interference.
“Internet experts, for the most part, have welcomed WikiScanner.” 
“Wikipedia co-founder Jimmy Wales spoke enthusiastically about
WikiScanner, noting in one source that ‘It brings an additional level of
transparency to what's going on at Wikipedia’ and stating in another that
it was ‘fabulous and I strongly support it.’ The BBC quoted an unnamed
Wikipedia spokesperson's praise for the tool in taking transparency ‘to
another level’ and preventing ‘an organisation or individuals from editing
articles that they're really not supposed to.’ In responding to the edits
from the Canadian Ministry of Industry, spokesman for the Wikimedia
Foundation Jay Walsh noted that neutrality of language and guarding against
conflicts of interest are two of the central pillars of Wikipedia, adding
that ‘The edits which should be trusted would come from people who don't
possess a conflict of interest, in this case, it would be worthwhile saying
that if someone is making edits from a computer within the government of
Canada … if it was someone within that ministry, that would theoretically
constitute a conflict of interest.’ Wales has speculated on a possible
warning to anonymous editors: ‘When someone clicks on ‘edit,’ it would be
interesting if we could say, ‘Hi, thank you for editing. We see you're
logged in from The New York Times. Keep in mind that we know that, and it's
public information’ … That might make them stop and think.’” 
Wikitech-l mailing list