
Governance and Human Rights

Transparency mechanisms for social media recommender algorithms


Objectives

The project aims to pilot a social media transparency mechanism, testing whether recommender systems lead users toward harmful content of various kinds and surfacing that information to the public.

This concern is, at its origin, a technical one, relating to the AI methods through which recommender systems learn. But it is also a social and political one, because recommender systems affect platform users and can significantly influence currents of political opinion.

It is vital that governments and the public have a way to access more information about the impact of recommender systems on platform users.

The project proposes a fact-finding exercise that would allow external researchers to investigate concerning claims inside social media companies, as a practical mechanism for increasing platform transparency.

In the first phase of our project, we reviewed possible methods for studying the effects of recommender systems on user behaviour and concluded that the best methods are those the companies use themselves, which are only available internally. We therefore proposed a transparency mechanism in which external researchers are embedded in a social media company and use these internal methods to address questions in the public interest about the possible harmful effects of recommender systems, focusing on the domain of Terrorist and Violent Extremist Content (TVEC).

Over the past year, the project has pursued the goal of piloting the proposed fact-finding study in one or more social media companies, an effort that encountered some challenges. This year, we will work with EU bodies to identify ways to support the operationalization of the Digital Services Act, focusing on its audit and transparency requirements.

This fact-finding exercise could help surface relevant information about recommender systems’ harms without compromising the rights of platform users or the intellectual property of companies.

Team

Lama Saouma

CEIMIA

AI Initiatives Lead

Ali Knott

Victoria University of Wellington

GPAI Expert and project co-lead

Dino Pedreschi

University of Pisa

GPAI Expert and project co-lead

Working Committee

Raja Chatila

Tapabrata Chakraborti

David Eyers

Andrew Trotman

Ricardo Baeza-Yates

The project has involved discussions with several companies (e.g. Twitter, YouTube, Facebook) and government groups (e.g. in New Zealand, the UK, Canada, France), as well as participation in several international initiatives relating to TVEC, in particular the Christchurch Call and the GIFCT.