
Handle: 20.500.12592/37pvprf

Artificial Intelligence, Content Moderation, and Freedom of Expression

26 Feb 2020

As governments, companies, and people around the world grapple with the challenges of hate speech, terrorist propaganda, and disinformation online, “artificial intelligence” (AI) is often proposed as a key tool for identifying and filtering out problematic content. “AI,” however, is not a simple solution or a single type of technology; in policy discussions, it has become a shorthand for an ever-changing suite of techniques for automated detection and analysis of content. Various forms of AI and automation are also used in the ranking and recommendation systems that curate the massive amounts of content available online. The use of these technologies raises significant questions about the influence of AI on our information environment and, ultimately, on our rights to freedom of expression and access to information. What follows is a compact position paper, a first version of which was written for the Bellagio, Italy, session of the Transatlantic Working Group (TWG), Nov. 12-16, 2019. It discusses the intersection of AI/automation and freedom of expression, focusing on two main areas.
Keywords

artificial intelligence, freedom of expression, content regulation

Authors

Emma Llansó, Joris van Hoboken, Paddy Leerssen, Jaron Harambam

Published in
Netherlands
