I was describing to someone how Wikipedia works:
"anyone can edit" etc.
He answered with this argument:
"Wikipedia is the triumph of the average person!
of the man in the street!)"
(average meaning: not good, not bad, just OK)
I asked "why?"
His explanation:
"Great brilliant works are built by individuals.
Groups of people can only create average works.
If someone writes something good in the wiki,
other average persons will interfere with his/her
work and turn it into an average work. If someone
writes something bad in the wiki, the others will
again turn it into something of average value.
With your system (meaning: Wikipedia's system)
you can be sure that you will never create
something too bad, but also never something too
good. You can only create average articles."
The idea behind his argument was that Wikipedia
will be a good resource as long as it attracts
good contributors, but it will soon become an
average site/encyclopaedia because it allows
anyone to join the project and edit, and most
people are just average persons and not brilliant
writers.
Do you think it's true? And how can we answer
this argument?
--Optim
On Sunday 28 July 2002 03:00 am, The Cunctator wrote:
> What are the articles this person has been changing?
For 66.108.155.126:
20:08 Jul 27, 2002 Computer
20:07 Jul 27, 2002 Exploit
20:07 Jul 27, 2002 AOL
20:05 Jul 27, 2002 Hacker
20:05 Jul 27, 2002 Leet
20:03 Jul 27, 2002 Root
20:02 Jul 27, 2002 Hacker
19:59 Jul 27, 2002 Hacker
19:58 Jul 27, 2002 Hacker
19:54 Jul 27, 2002 Principle of least astonishment
19:54 Jul 27, 2002 Hacker
19:52 Jul 27, 2002 Trance music
19:51 Jul 27, 2002 Trance music
For 208.24.115.6:
20:20 Jul 27, 2002 Hacker
For 141.157.232.26:
20:19 Jul 27, 2002 Hacker
Most of these were complete replacements with incoherent statements,
such as "TAP IS THE ABSOLUTE DEFINITION OF THE NOUN HACKER" for Hacker.
For the specifics follow http://www.wikipedia.com/wiki/Special:Ipblocklist
and look at the contribs.
--mav
Forwarding.
Pine
( https://meta.wikimedia.org/wiki/User:Pine )
---------- Forwarded message ---------
From: Amir E. Aharoni <amir.aharoni(a)mail.huji.ac.il>
Date: Mon, May 25, 2020 at 7:22 PM
Subject: [Wikimedia-l] Language Showcase, May 2020
To: wikimedia-l <wikimedia-l(a)lists.wikimedia.org>
Hello,
This is an announcement about a new installment of the Language Showcase, a
series of presentations about various aspects of language diversity and its
connection to Wikimedia Projects.
This new installment will deal with the latest design research about the
upcoming section translation feature for Content Translation.
This session is going to be broadcast over Zoom, and a recording will be
published for later viewing. You can also participate in the conversation
on IRC or join us in the Zoom meeting.
Please read below for the event details, including local time and joining
links, and let us know if you have any questions.
Thank you!
Amir
== Details ==
# Event: Language Showcase #5
# When: May 27, 2020 (Wednesday) at 13:00 UTC (check local time
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200527T1300 )
# Where:
Join Zoom Meeting
https://wikimedia.zoom.us/j/97081030000
Meeting ID: 970 8103 0000
IRC - #wikimedia-office (on Freenode)
# Agenda:
The latest design research about the upcoming section translation feature
for Content Translation.
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
“We're living in pieces,
I want to live in peace.” – T. Moore
Forwarding.
Pine
( https://meta.wikimedia.org/wiki/User:Pine )
---------- Forwarded message ---------
From: Janna Layton <jlayton(a)wikimedia.org>
Date: Fri, May 15, 2020 at 8:05 PM
Subject: [Wiki-research-l] [Wikimedia Research Showcase] May 20, 2020:
Human in the Loop Machine Learning
To: <analytics(a)lists.wikimedia.org>,
<wikimedia-l(a)lists.wikimedia.org>,
<wiki-research-l(a)lists.wikimedia.org>
Hi all,
The next Research Showcase will be live-streamed on Wednesday, May 20, at
9:30 AM PDT/16:30 UTC.
This month we will learn about recent research on machine learning systems
that rely on human supervision for their learning and optimization -- a
research area commonly referred to as Human-in-the-Loop ML. In the first
talk, Jie Yang will present a computational framework that relies on
crowdsourcing to identify influencers in social networks (Twitter) by
selectively obtaining labeled data. In the second talk, Estelle Smith will
discuss the role of the community in maintaining ORES, the machine learning
quality-prediction system used in numerous Wikipedia applications.
YouTube stream: https://www.youtube.com/watch?v=8nDiu2ebdOI
As usual, you can join the conversation on IRC at #wikimedia-research. You
can also watch our past research showcases here:
https://www.mediawiki.org/wiki/Wikimedia_Research/Showcase
This month's presentations:
*OpenCrowd: A Human-AI Collaborative Approach for Finding Social
Influencers via Open-Ended Answers Aggregation*
By: Jie Yang, Amazon (current), Delft University of Technology (starting
soon)
Finding social influencers is a fundamental task in many online
applications ranging from brand marketing to opinion mining. Existing
methods heavily rely on the availability of expert labels, whose collection
is usually a laborious process even for domain experts. Using open-ended
questions, crowdsourcing provides a cost-effective way to find a large
number of social influencers in a short time. Individual crowd workers,
however, only possess fragmented knowledge that is often of low quality. To
tackle those issues, we present OpenCrowd, a unified Bayesian framework
that seamlessly incorporates machine learning and crowdsourcing for
effectively finding social influencers. To infer a set of influencers,
OpenCrowd bootstraps the learning process using a small number of expert
labels and then jointly learns a feature-based answer quality model and the
reliability of the workers. Model parameters and worker reliability are
updated iteratively, allowing their learning processes to benefit from each
other until an agreement on the quality of the answers is reached. We
derive a principled optimization algorithm based on variational inference
with efficient updating rules for learning OpenCrowd parameters.
Experimental results on finding social influencers in different domains
show that our approach substantially improves the state of the art by 11.5%
AUC. Moreover, we empirically show that our approach is particularly useful
in finding micro-influencers, who are very directly engaged with smaller
audiences.
Paper: https://dl.acm.org/doi/fullHtml/10.1145/3366423.3380254
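For readers who want to see the iterative idea from the abstract in
concrete form: the toy Python/numpy sketch below is not the authors'
OpenCrowd implementation (which is a Bayesian model trained with
variational inference), and every variable, update rule, and data point
in it is invented. It only mirrors the pattern the abstract describes:
bootstrap from a few expert labels, then alternate between a
feature-based answer-quality model and per-worker reliability estimates.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: N candidate answers with D features, W crowd workers voting
# yes/no on each, and a small set of expert labels to bootstrap from.
N, D, W = 200, 5, 10
X = rng.normal(size=(N, D))                      # answer features
true_w = rng.normal(size=D)
true_label = (X @ true_w > 0).astype(float)      # hidden ground truth
worker_skill = rng.uniform(0.55, 0.95, size=W)   # hidden worker accuracy
votes = np.array([[y if rng.random() < s else 1 - y for s in worker_skill]
                  for y in true_label])          # noisy votes, shape (N, W)
expert_idx = rng.choice(N, size=10, replace=False)
expert_labels = true_label[expert_idx]           # small expert seed set

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bootstrap: trust the expert labels, assume uniform worker reliability.
reliability = np.full(W, 0.7)
w = np.zeros(D)                                  # feature-based quality model

for _ in range(50):
    # (1) Aggregate votes, weighting each worker by current reliability,
    #     and combine with the feature-based model's prediction.
    vote_evidence = votes @ np.log(reliability / (1 - reliability)) + \
                    (1 - votes) @ np.log((1 - reliability) / reliability)
    quality = sigmoid(vote_evidence + X @ w)     # soft label per answer
    quality[expert_idx] = expert_labels          # expert labels stay fixed

    # (2) Update the feature-based quality model toward the soft labels
    #     (one gradient step of logistic regression; step size of 1 and
    #     50 iterations are arbitrary choices for this sketch).
    w -= X.T @ (sigmoid(X @ w) - quality) / N

    # (3) Re-estimate each worker's reliability as their agreement with
    #     the current soft labels.
    agree = votes * quality[:, None] + (1 - votes) * (1 - quality[:, None])
    reliability = np.clip(agree.mean(axis=0), 0.05, 0.95)

print("estimated worker reliability:", np.round(reliability, 2))
print("accuracy of inferred labels:", ((quality > 0.5) == true_label).mean())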
*Keeping Community in the Machine-Learning Loop*
By: C. Estelle Smith, MS, PhD Candidate, GroupLens Research Lab at the
University of Minnesota
On Wikipedia, sophisticated algorithmic tools are used to assess the
quality of edits and take corrective actions. However, algorithms can fail
to solve the problems they were designed for if they conflict with the
values of communities who use them. In this study, we take a
Value-Sensitive Algorithm Design approach to understanding a
community-created and -maintained machine learning-based algorithm called
the Objective Revision Evaluation System (ORES)—a quality prediction system
used in numerous Wikipedia applications and contexts. Five major values
converged across stakeholder groups that ORES (and its dependent
applications) should: (1) reduce the effort of community maintenance, (2)
maintain human judgement as the final authority, (3) support differing
peoples’ differing workflows, (4) encourage positive engagement with
diverse editor groups, and (5) establish trustworthiness of people and
algorithms within the community. We reveal tensions between these values
and discuss implications for future research to improve algorithms like
ORES.
Paper:
https://commons.wikimedia.org/wiki/File:Keeping_Community_in_the_Loop-_Unde…
--
Janna Layton (she, her)
Administrative Assistant - Product & Technology
Wikimedia Foundation <https://wikimediafoundation.org/>