Dr. Heather Ford wrote:
... You may want to read Angele Christin's paper that just came out in Big Data and Society that complicates the notion of judges accepting algorithmic reasoning wholesale in making decisions.
http://journals.sagepub.com/eprint/SPgDYyisV8mAJn4fm7Xi/full
I am in Australia right now, working today to save its would-be immigrants from stupid robot AI pronunciation assessments:
https://www.theguardian.com/australia-news/2017/aug/08/computer-says-no-iris...
http://www.smh.com.au/technology/technology-news/australian-exnews-reader-wi...
I clearly remember the day in 1996 when the guy who since wrote the Pearson Spoken English test rejected my attempts at accent adaptation.
The fight isn't against robots, it's against their lazy creators.
Best regards, James
... what does this post have to do with wikis?
FRSbot is a very prominent bot on Wikipedia, crucial to obtaining neutral feedback for less-prominent RFCs, but it doesn't work the way people think it does, or the way its authors have implied it does, or the way it should if it were to be neutral.
Take a look at its code and see how it distributes requests. They aren't automated; they are only automatically prepared for a completely obscured step requiring manual intervention, which, in my opinion, gives the person performing that manual step far more power over the controversies in the encyclopedia than any other role.
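To make the distinction concrete, here is a minimal sketch of what a *fully* automated feedback-request distribution might look like: a seeded random sample of subscribers with no human in the loop. Every name and function here is hypothetical for illustration; this is not FRSbot's actual code or API, and the point of the email above is that the real bot stops at preparing such a list, leaving the actual sending to an unseen human step.

```python
import random

def prepare_frs_batch(rfc_id, subscribers, n=5, seed=None):
    """Hypothetical sketch: select a random sample of feedback-service
    subscribers to notify about an RFC. A fully automated bot would
    post the notices itself right after sampling; per the email above,
    the real bot only *prepares* a batch like this, and a human decides
    whether and to whom it is actually sent."""
    rng = random.Random(seed)
    return rng.sample(subscribers, min(n, len(subscribers)))

# Illustrative usage with made-up subscriber names.
subs = [f"User{i}" for i in range(1, 101)]
batch = prepare_frs_batch("RFC:Example", subs, n=5, seed=42)
print(batch)
```

The neutrality concern in the email maps directly onto the gap between the sampling step (auditable, reproducible with a seed) and the manual distribution step (obscured, discretionary).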
Who actually does that manual distribution step? Legotkm or James Hare?
wiki-research-l@lists.wikimedia.org