Hi Everyone,
The next Research Showcase will be live-streamed this Wednesday, Aug 17, 2016, at 11:30 AM PDT (18:30 UTC).
YouTube stream: http://youtu.be/rsFmqYxtt9w
As usual, you can join the conversation on IRC at #wikimedia-research, and you can watch our past research showcases here: https://www.mediawiki.org/wiki/Wikimedia_Research/Showcase#Archive
This month's showcase includes:
Computational Fact Checking from Knowledge Networks
By Giovanni Luca Ciampaglia <https://www.mediawiki.org/wiki/User:Junkie.dolphin>

Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Fact checking is often a tedious and repetitive task, and even simple automation opportunities may bring significant improvements for human fact checkers. In this talk I will describe how we are trying to approximate the complexities of human fact checking by exploring a knowledge graph under a properly defined proximity measure. Framed as a network traversal problem, this approach is feasible with efficient computational techniques. We evaluate it by examining tens of thousands of claims related to history, entertainment, geography, and biographical information, using the public knowledge graph extracted from Wikipedia by the DBpedia project, and show that the method does indeed assign higher confidence to true statements than to false ones. One advantage of this approach is that, together with a numerical evaluation, it also provides a sequence of statements that can easily be inspected by a human fact checker.
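To get a feel for the approach ahead of the talk, here is a minimal Python sketch of the core idea: score a claim by the best path connecting subject and object in a knowledge graph, down-weighting paths that run through high-degree (generic) nodes. The toy graph, the exact penalty form, and the use of networkx are illustrative assumptions, not the speaker's actual code or data.

    # Toy sketch: score a (subject, object) claim by the best path linking
    # them in a knowledge graph, penalizing high-degree intermediate nodes.
    # Graph, triples, and penalty form are illustrative assumptions.
    import math
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("Barack Obama", "Honolulu"),
        ("Honolulu", "Hawaii"),
        ("Hawaii", "United States"),
        ("Barack Obama", "United States"),
        ("United States", "Ottawa"),
    ])

    def path_proximity(graph, path):
        # Log-degree penalty: evidence passing through generic, highly
        # connected entities counts for less than a short, specific chain.
        penalty = sum(math.log(graph.degree(v)) for v in path[1:-1])
        return 1.0 / (1.0 + penalty)

    def truth_score(graph, subj, obj, cutoff=4):
        # Best proximity over all simple paths up to a length cutoff.
        if not (graph.has_node(subj) and graph.has_node(obj)):
            return 0.0
        paths = nx.all_simple_paths(graph, subj, obj, cutoff=cutoff)
        return max((path_proximity(graph, p) for p in paths), default=0.0)

    print(truth_score(G, "Barack Obama", "Hawaii"))  # short, specific path: higher
    print(truth_score(G, "Barack Obama", "Ottawa"))  # reachable only via a hub: lower

True statements tend to be linked by short paths through specific entities, so they score higher; taking the max over paths also yields the single best path, which is the human-inspectable "sequence of statements" the abstract mentions.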
Deploying and Maintaining AI in a Socio-technical System: Lessons Learned
By Aaron Halfaker <https://www.mediawiki.org/wiki/User:Halfak_(WMF)>

We should exercise great caution when deploying AI into our social spaces. The algorithms that make counter-vandalism in Wikipedia orders of magnitude more efficient also have the potential to perpetuate biases and silence whole classes of contributors. This presentation will describe the system efficiency characteristics that make AI so attractive for supporting quality control activities in Wikipedia. Aaron will then tell two stories of how these algorithms brought new, problematic biases to quality control processes in Wikipedia, and how the Revision Scoring team (https://meta.wikimedia.org/wiki/R:Revision_scoring_as_a_service) learned about and addressed those issues in ORES (https://meta.wikimedia.org/wiki/ORES), a production-level AI service for Wikimedia wikis. He'll also make an overdue call to action toward leveraging human review of AI's biases in the practice of AI development.
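ORES is a live web service, so you can poke at the kind of scores the talk discusses yourself. The sketch below queries its public scores endpoint with Python's requests library; the endpoint path, wiki, model name, and revision ID are assumptions for illustration, so check the ORES documentation linked above for the current API.

    # Hedged sketch of asking ORES how likely an edit is to be damaging.
    # The v3 endpoint shape and the example revision ID are assumptions;
    # consult https://meta.wikimedia.org/wiki/ORES for the live API.
    import requests

    def ores_score(revid, wiki="enwiki", model="damaging"):
        url = "https://ores.wikimedia.org/v3/scores/{}/{}/{}".format(
            wiki, revid, model)
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        # Scores come back nested by wiki, then revision ID, then model.
        return resp.json()[wiki]["scores"][str(revid)][model]

    if __name__ == "__main__":
        print(ores_score(123456))  # arbitrary revision ID, for illustration

A patrolling tool would threshold the returned probability to decide which edits to flag; part of the talk's point is that where that threshold sits, and whose edits it flags, is a social decision as much as a technical one.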
We look forward to seeing you!
Reminder that this starts in a few minutes.
Pine
On Tue, Aug 16, 2016 at 1:50 PM, Sarah R <srodlund@wikimedia.org> wrote:
--
Sarah R. Rodlund
Senior Project Coordinator-Engineering, Wikimedia Foundation
srodlund@wikimedia.org