A study proposing algorithms to reduce radicalization won the best paper award

The Web Conference, held in May, gave its Best Paper Award 2022 to “Rewiring what-to-watch-next Recommendations to Reduce Radicalization Pathways”, authored by Francesco Fabbri, Yanhao Wang, Francesco Bonchi, Carlos Castillo and Michael Mathioudakis.

– The two lead authors of the article also worked at our university: the first author, Fabbri, as an intern and the second author, Wang, as a postdoctoral fellow, explains Mathioudakis, who is an associate professor in the computer science department.

The Web Conference is a leading international research conference and one of the few in computer science with a Jufo 2 rating, the Finnish Publication Forum classification for prestigious scientific publication channels.

Algorithms to strengthen recommendations of verified content

Web platforms, such as YouTube, generally recommend continuing to watch videos similar to those the user has recently viewed.

– If the user receives and follows a series of recommendations with similar content, they may not encounter content that represents a different point of view. As a hypothetical example, suppose a user watches a video on an online video platform and the video promotes a conspiracy theory about the war in Ukraine, Mathioudakis explains.

– If the platform subsequently only recommends similar war conspiracy videos, the user may not be informed about official or verified reports but end up watching a series of conspiracy videos.

Conspiratorial content and extremism are a growing concern in Western countries; they contributed, for example, to the storming of the Capitol in the United States on January 6, 2021. Finding ways to de-radicalize social media content is a pressing issue for the European Union, as well as for the social media platforms themselves.

– In our work, we develop algorithms that video and other web platforms could use to make minimal changes to the recommendations they provide to their users, so that users are not “stuck” watching content of dubious quality, such as extremist content or misinformation, says Mathioudakis.

For the paper, the team used simulations on a public dataset of YouTube recommendations to study the effectiveness of the proposed algorithms.

– Essentially, our algorithms aim to choose a small number of existing recommendations and replace them with others chosen appropriately, so that a user who follows the recommendations to browse content on the platform has enough options to move away from questionable content, says Mathioudakis.

The algorithms would allow users to navigate a platform’s content more freely and reach other kinds of material as well. The incentive for platforms would be to offer higher-quality videos.
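To make the rewiring idea concrete, below is a small, hypothetical Python sketch; the graph, labels and function names are invented for illustration and are not the authors’ code or data. It simulates viewers who keep clicking the “watch next” recommendations, estimates how many clicks it takes on average to get from a conspiracy video to verified content, and shows how replacing a single recommendation shortens that path.

```python
import random

# Hypothetical toy "what-to-watch-next" graph: each video lists the videos
# recommended after it. Illustration only, not the paper's data or code.
recommendations = {
    "conspiracy_1": ["conspiracy_2", "conspiracy_3"],
    "conspiracy_2": ["conspiracy_1", "conspiracy_3"],
    "conspiracy_3": ["conspiracy_1", "news_report"],
    "news_report":  ["documentary", "conspiracy_1"],
    "documentary":  ["news_report", "documentary"],
}
harmful = {"conspiracy_1", "conspiracy_2", "conspiracy_3"}


def expected_clicks_to_verified(graph, start, trials=20_000, max_steps=1_000):
    """Estimate the average number of recommendation clicks a viewer starting
    at `start` needs before reaching a video outside the harmful set."""
    total = 0
    for _ in range(trials):
        node, steps = start, 0
        while node in harmful and steps < max_steps:
            node = random.choice(graph[node])
            steps += 1
        total += steps
    return total / trials


before = expected_clicks_to_verified(recommendations, "conspiracy_1")

# "Rewire" a single recommendation: one harmful-to-harmful link is replaced
# with a link to verified content; everything else stays the same.
rewired = {video: list(recs) for video, recs in recommendations.items()}
rewired["conspiracy_2"][1] = "news_report"

after = expected_clicks_to_verified(rewired, "conspiracy_1")

print(f"Expected clicks to reach verified content, original graph: {before:.2f}")
print(f"Expected clicks after replacing one recommendation:        {after:.2f}")
```

In the paper itself, deciding which few recommendations to replace, within a small budget of changes, is precisely what the proposed algorithms optimize.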

– It’s not a magic bullet that fixes everything, but it’s a way to prevent users from getting stuck on content of questionable quality.

Sharon D. Cole