Hey everybody,
TL;DR: I wanted to let you know about an upcoming experimental Reddit AMA
("ask me anything") chat we have planned. It will focus on artificial
intelligence on Wikipedia and how we're working to counteract vandalism
while also making life better for newcomers.
We plan to hold this chat on June 1st at 21:00 UTC/14:00 PDT in the /r/IAmA
subreddit[1]. I'd love to answer any questions you have about these topics,
and I'll send a follow-up email to this thread shortly before the AMA
begins.
----
For those who don't know who I am, I create artificial intelligences[2]
that support the volunteers who edit Wikipedia[3]. For over ten years, I've
been fascinated by the ways that crowds of volunteers build massive,
high-quality information resources like Wikipedia.
For more background, I research and design technologies that make it easier
to spot vandalism on Wikipedia, which helps support the hundreds of
thousands of editors who make productive contributions. I also think a lot
about the dynamics between communities and new users, and about ways to
make communities inviting and welcoming both to long-time community members
and to newcomers who may not be aware of community norms. For a quick
sampling of my work, check out my most impactful research paper about
Wikipedia[3], some recent coverage of my work in *Wired*[4], the master
list of my projects on my WMF staff user page[5], the documentation for the
technology team I run[9], or the home page for Wikimedia Research[8].
This AMA, which I'm doing with the Foundation's Communications department,
is something of an experiment. The intended audience is people who might
not currently be part of our community but have questions about the way we
work, as well as potential research collaborators who might want to work
with our data or tools. Many may be familiar with Wikipedia but not with
the work we do as a community behind the scenes.
I'll be talking about my work on the ethics of AI, how we think about
artificial intelligence on Wikipedia, and the ways we're working to
counteract vandalism on the world's largest crowdsourced source of
knowledge, such as the ORES extension[6], which you may have seen
highlighting possibly problematic edits on your watchlist and in
RecentChanges.
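To give a concrete sense of what ORES produces, here is a minimal Python sketch that extracts a "damaging" probability from a response shaped like the output of the ORES v3 scores API. The revision ID and probabilities below are invented for illustration; a real client would fetch this JSON from the ORES service rather than hard-coding it.

```python
import json

# A sample response shaped like the ORES v3 scores API output.
# The revision ID and probabilities here are made up for illustration.
sample = json.loads("""
{
  "enwiki": {
    "scores": {
      "123456": {
        "damaging": {
          "score": {
            "prediction": false,
            "probability": {"false": 0.93, "true": 0.07}
          }
        }
      }
    }
  }
}
""")

def damaging_probability(response, wiki, rev_id):
    """Pull the 'damaging' probability for one revision out of a scores response."""
    score = response[wiki]["scores"][str(rev_id)]["damaging"]["score"]
    return score["probability"]["true"]

print(damaging_probability(sample, "enwiki", 123456))  # prints 0.07
```

A tool like the watchlist highlighter can then compare that probability against a threshold to decide whether to flag the edit for human review.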
I’d love for you to join this chat and ask questions. If you don't use
Reddit, or prefer not to, we will also be taking questions on ORES'
MediaWiki talk page[7] and posting answers to both threads.
1. https://www.reddit.com/r/IAmA/
2. https://en.wikipedia.org/wiki/Artificial_intelligence
3. http://www-users.cs.umn.edu/~halfak/publications/The_Rise_and_Decline/halfa…
4. https://www.wired.com/2015/12/wikipedia-is-using-ai-to-expand-the-ranks-of-…
5. https://en.wikipedia.org/wiki/User:Halfak_(WMF)
6. https://www.mediawiki.org/wiki/Extension:ORES
7. https://www.mediawiki.org/wiki/Talk:ORES
8. https://www.mediawiki.org/wiki/Wikimedia_Research
9. https://www.mediawiki.org/wiki/Wikimedia_Scoring_Platform_team
10. https://www.mediawiki.org/wiki/ORES
-Aaron
Principal Research Scientist @ WMF
User:EpochFail / User:Halfak (WMF)
----
Hi all,
We are preparing to conduct some research into how Requests for Comments
(RfCs) get discussed and closed. This work is described further on the
following Wikimedia page:
https://meta.wikimedia.org/wiki/Research:Discussion_summarization_and_decision_support_with_Wikum
To begin, we are planning a round of interviews with people who participate
in RfCs on English Wikipedia, including frequent closers, infrequent
closers, and people who participate in but don't close RfCs. We will be
asking them how they go about closing RfCs and for their opinions on how
the overall process could be improved. We are also creating a database of
all the RfCs on English Wikipedia that have gone through a formal closure
process and parsing their conversations.
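As a rough illustration of what parsing a formally closed discussion might involve, here is a small Python sketch that pulls the closing statement out of wikitext wrapped in the {{archive top}} / {{archive bottom}} templates used to close discussions on English Wikipedia. The discussion text is invented, and a real parser would need to handle template variants and nesting that this simple regex does not.

```python
import re

# Hypothetical fragment of wikitext: a closed RfC wrapped in the
# {{archive top}} / {{archive bottom}} closure templates. The
# discussion content is invented for illustration.
wikitext = """
{{archive top|result=Consensus to adopt the proposal. ~~~~}}
== RfC: Example question ==
* '''Support''' per nominator.
* '''Oppose''' as premature.
{{archive bottom}}
"""

# Match the result= parameter of {{archive top}}. A production parser
# would also need to handle variants ({{atop}}, {{closed rfc top}}, etc.)
# and templates nested inside the result text.
CLOSE_RE = re.compile(r"\{\{archive top\|result=(.*?)\}\}", re.DOTALL)

def extract_closure(text):
    """Return the closer's result statement, or None if the discussion is open."""
    match = CLOSE_RE.search(text)
    return match.group(1).strip() if match else None

print(extract_closure(wikitext))  # prints: Consensus to adopt the proposal. ~~~~
```

Running a function like this over archived discussion pages is one way such a database of closures could be populated.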
While planning the interviews, we realized that the information we gather
could be of interest to the Wikimedia community, so we wanted to open it up
and ask whether there is anything you would be interested in learning about
RfCs or RfC closure from the people who participate in them. Also, if you
know of existing work in this area, please let us know.
Thank you!
Amy
--
Amy X. Zhang | Ph.D. student at MIT CSAIL | http://people.csail.mit.edu/axz
| @amyxzh
----
Hello all,
I recently attended the 2017 Conference on Human Factors in Computing Systems (CHI) and put together a small report/reflection for Aaron Halfaker on some of the work presented there that I found interesting. If you’d like to check out the report, it can be found here: https://meta.wikimedia.org/wiki/User:Hall1467/CHI_2017_Report. CHI is a yearly human-computer interaction conference and a common venue for studies of peer production communities such as Wikipedia.
Feel free to leave questions or comments on the talk page! Have a great rest of the week.
Andrew