Interesting; thanks for sharing.

Somewhat off-topic here, but some of you may be interested in Wikipedian Amanda Levendowski's new paper on AI and copyright:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024938

Abstract:

As the use of artificial intelligence (AI) continues to spread, we have seen an increase in examples of AI systems reflecting or exacerbating societal bias, from racist facial recognition to sexist natural language processing. These biases threaten to overshadow AI’s technological gains and potential benefits. While legal and computer science scholars have analyzed many sources of bias, including the unexamined assumptions of its often-homogenous creators, flawed algorithms, and incomplete datasets, the role of the law itself has been largely ignored. Yet just as code and culture play significant roles in how AI agents learn about and act in the world, so too do the laws that govern them. This Article is the first to examine perhaps the most powerful law impacting AI bias: copyright. 

Artificial intelligence often learns to “think” by reading, viewing, and listening to copies of human works. This Article first explores the problem of bias through the lens of copyright doctrine, looking at how the law’s exclusion of access to certain copyrighted source materials may create or promote biased AI systems. Copyright law limits bias mitigation techniques, such as testing AI through reverse engineering, algorithmic accountability processes, and competing to convert customers. The rules of copyright law also privilege access to certain works over others, encouraging AI creators to use easily available, legally low-risk sources of data for teaching AI, even when those data are demonstrably biased. Second, it examines how a different part of copyright law — the fair use doctrine — has traditionally been used to address similar concerns in other technological fields, and asks whether it is equally capable of addressing them in the field of AI bias. The Article ultimately concludes that it is, in large part because the normative values embedded within traditional fair use ultimately align with the goals of mitigating AI bias and, quite literally, creating fairer AI systems.

Keywords: copyright, competition, fair use, artificial intelligence, machine learning, algorithms, bias


On Mon, Oct 16, 2017 at 9:11 AM Owen Blacker <owen@blacker.me.uk> wrote:
https://www.gov.uk/government/publications/growing-the-artificial-intelligence-industry-in-the-uk/executive-summary

May be relevant to our interests.

To continue developing and applying AI, the UK will need to increase ease of access to data in a wider range of sectors. This Review recommends:

  • Development of data trusts, to improve trust and ease around sharing data
  • Making more research data machine readable
  • Supporting text and data mining as a standard and essential tool for research.
_______________________________________________
Publicpolicy mailing list
Publicpolicy@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/publicpolicy