On Thu, Nov 17, 2016 at 10:30 PM, Pine W <wiki.pine(a)gmail.com> wrote:
> As a reminder: IRC is governed by Freenode. Channels can have their own
> rules, and there are widely varying systems of internal governance for
> Wikimedia IRC channels. I think it's important to note that WMF and the
> Wikimedia community are guests on Freenode, and I'm uncomfortable with the
> proposition to extend a WMF policy into IRC channels without explicit
> consent from the ops of those channels; it seems to me that the TCC would
> be a per-channel opt-in on IRC, not a WMF blanket standard.
>
> Speaking more generally, I am wary of WMF encroachment into what should
> be fundamentally community-governed spaces. I have not heard a lot of
> objections from the community to the proposed technical code of conduct,
> and I've heard some arguments for and against the rationale for having it;
> my main concern is that I would prefer that the final document be ratified
> through community-led processes.
I agree that changes here should involve heavy community participation,
which is a reason I'm trying to initiate broader discussion.
We have been moderately successful in "outsourcing" real-time chat to a
third party (IRC and Freenode) in the past, but doing so leaves us without
control over what may become a fundamental technology for our platform.
Certainly we could simply embed a web-based IRC client in talk pages, for
instance. That would continue the status quo. It's one point in the
possible solution space, and I'm not foreclosing it. I just think we
should discuss discussions holistically. What are the benefits of
disclaiming responsibility for real-time chat? What are the benefits of
the Freenode conduct policy? What are the disadvantages?
We could also "more tightly integrate chat" without leaving IRC or
Freenode. For the [[en:MIT Mystery Hunt]], many teams build quite elaborate
IRC bots that layer additional functionality on top of IRC. Matt's email
mentioned a "central reporting place". We could certainly allow IRC
channels to opt in to a WMF code of conduct and to running a WMF bot
providing a standardized and consistent reporting mechanism, block
list, and abuse logger. That's another point in the solution space.
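To make that concrete: the core of such a reporting bot can be quite small. Here's a rough sketch of just the message-handling piece — parsing raw IRC PRIVMSG lines and turning report commands into structured records for a central log. The "!report" command name and the record fields are my own illustrative assumptions, not an existing WMF tool.

```python
import re
from datetime import datetime, timezone

# Match a standard IRC PRIVMSG line of the form:
#   :nick!user@host PRIVMSG #channel :message text
PRIVMSG_RE = re.compile(
    r'^:(?P<nick>[^!\s]+)![^\s]+ PRIVMSG (?P<channel>#[^\s]+) :(?P<text>.*)$'
)

def parse_report(raw_line, now=None):
    """If raw_line is a '!report <details>' message, return a structured
    record suitable for a central abuse log; otherwise return None."""
    m = PRIVMSG_RE.match(raw_line.strip())
    if not m:
        return None  # not a channel message (e.g. PING, JOIN, ...)
    text = m.group('text')
    if not text.startswith('!report '):
        return None  # ordinary chat, not a report command
    return {
        'reporter': m.group('nick'),
        'channel': m.group('channel'),
        'details': text[len('!report '):].strip(),
        'timestamp': (now or datetime.now(timezone.utc)).isoformat(),
    }
```

A real bot would sit in each opted-in channel, feed records like these to the central reporting place, and consult a shared block list; the point is only that the per-channel footprint is a lightweight, opt-in bot rather than a platform change.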
My personal dog in the race is "tools". I totally love community-led
processes. But I am concerned that WMF is not providing the communities
adequate *tools* to make meaningful improvements in their social
environments. Twitter rolled out a new suite of anti-abuse features this
week (https://9to5mac.com/2016/11/15/twitter-online-abuse-mute-features/),
so sadly the WMF platform is now behind Twitter in terms of providing a
healthy working environment for our contributors. We need to step up our
game. As you note, the first step is this discussion involving the
community to take a broad look at discussions on our platform and determine
some basic social principles as well as architectural planks and
commonalities. Hopefully we can then follow that up with an aggressive
development effort to deploy some new tools and features. I believe this
will be an iterative process: our first tools will fall short, and we'll
need to continue "discussing discussions", revisiting assumptions, and
building improved tools.
But we can't allow ourselves to stand still.
--scott
--
(http://cscott.net)