The algorithms behind the spread of anti-Semitism online
THE CONVERSATION via AP — Anti-Semitic incidents have shown a sharp rise in the United States. The Anti-Defamation League, a New York-based Jewish civil rights group that has been tracking cases since 1979, found there were 2,717 incidents in 2021. That’s a 34% increase from 2020.
In Europe, the European Commission has seen a seven-fold increase in anti-Semitic posts on French-language accounts and a more than thirteen-fold increase in anti-Semitic comments on German channels during the pandemic.
Along with other researchers who study anti-Semitism, we began to examine how technology and the business models of social media platforms fueled it. A 2022 book we co-edited, “Antisemitism on Social Media,” offers insights from the US, Germany, Denmark, Israel, India, the UK and Sweden on how algorithms on Facebook, Twitter, TikTok and YouTube help spread anti-Semitism.
What does anti-Semitism look like on social media?
Hatred against Jews on social media is often expressed in stereotypical portrayals of Jews that stem from Nazi propaganda or Holocaust denial.
Anti-Semitic posts on social media also express hatred towards Jews based on the notion that all Jews are Zionists – that is, they are part of the national movement supporting Israel as a Jewish state – and Zionism is constructed as an innate evil.
However, anti-Semitism today is not only directed against Israelis, and it does not always take the form of traditional slogans or hate speech.
Contemporary anti-Semitism manifests itself in many forms: GIFs, memes, vlogs, comments, and reactions such as likes and dislikes.
Researcher Sophie Schmalenberger has found that anti-Semitism is expressed not only in harsh and hurtful language and images on social media, but also in coded forms that can easily go unnoticed. For example, on Facebook, Germany’s far-right Alternative für Deutschland, or AfD, omits mention of the Holocaust in posts about World War II. It also uses coded language and rhetoric that make anti-Semitism appear acceptable.
Anti-Semitism can also take subtle forms, such as emoji combinations. Pairing a Star of David, a Jewish symbol, with a rat echoes Nazi propaganda equating Jews with vermin. In Nazi Germany, the constant repetition and normalization of such depictions dehumanized Jews and ultimately helped make genocide acceptable.
Another form of anti-Semitism on social media is the organized troll attack: users coordinate to disrupt online events by inundating them with messages that deny the Holocaust or spread conspiracy myths, as QAnon does.
Academics Gabi Weimann and Natalie Masri have studied TikTok. They found that children and young adults are particularly at risk of being exposed, often unwittingly, to anti-Semitism on the hugely popular and growing platform, which already has more than a billion users worldwide.
Some of the published content combines snippets of footage from Nazi Germany with new text disparaging or mocking Holocaust victims.
Continued exposure to anti-Semitic content at a young age, researchers say, can lead to both the normalization of such content and the radicalization of the TikTok viewer.
Anti-Semitism is fueled by recommendation algorithms, which are programmed to maximize engagement: the more engagement a post receives, the more users see it. Engagement includes every reaction, whether likes or dislikes, shares or comments, including critical counter-comments. The problem is that these reactions also trigger rewarding dopamine hits in users.
Because outrageous content creates the most engagement, users feel more encouraged to post hateful content.
However, even social media users who post critical comments on hateful content often do not realize that, because of how the algorithms work, they end up contributing to its spread.
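The dynamic described above can be illustrated with a minimal sketch. This is hypothetical code, not any platform’s actual system: it assumes a simple engagement score in which every reaction, positive or negative, raises a post’s visibility, so even critical counter-comments push a post up the feed.

```python
# Hypothetical sketch of engagement-weighted ranking. The weights and
# post fields are illustrative assumptions, not a real platform's code.

def engagement_score(post):
    """Every interaction counts toward visibility, regardless of sentiment."""
    return (post["likes"] + post["dislikes"]
            + 2.0 * post["shares"]       # assumed: shares reach new feeds
            + 1.5 * post["comments"])    # includes critical counter-comments

def rank_feed(posts):
    """Order posts by engagement; the most-reacted-to post surfaces first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "neutral", "likes": 50, "dislikes": 1, "shares": 2, "comments": 3},
    {"id": "outrage", "likes": 20, "dislikes": 40, "shares": 15, "comments": 60},
]

# The outrage post ranks first even though most reactions to it are negative.
print([p["id"] for p in rank_feed(posts)])  # → ['outrage', 'neutral']
```

Because the score is blind to sentiment, dislikes and angry replies are indistinguishable from approval, which is the mechanism researchers point to when they say counter-speech can amplify hateful content.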
Research on video recommendations on YouTube also shows how algorithms gradually lead users to more radical content. Algorithmic anti-Semitism is therefore a form of what criminologist Matthew Williams calls “algorithmic hate” in his book “The Science of Hate”.
What can be done?
To combat anti-Semitism on social media, strategies must be evidence-based. But neither social media companies nor researchers have so far devoted enough time and resources to this question.
Studying anti-Semitism on social media poses unique challenges: researchers need access to platform data and funding in order to help develop effective counter-strategies. Until now, they have depended on the cooperation of social media companies for that access, which remains largely unregulated.
Social media companies have implemented guidelines for reporting anti-Semitism, and civil society organizations have demanded action against algorithmic anti-Semitism. However, the measures taken so far are woefully insufficient, and some are even counterproductive. For example, counter-speech, often presented as a possible strategy, tends to amplify hateful content.
To effectively combat anti-Semitic hate speech, social media companies would have to change the algorithms that collect and store user data for advertisers, since advertising accounts for a large share of their revenue.
There is a borderless, global spread of anti-Semitic social media posts that is happening on an unprecedented scale. We believe it will take the collective efforts of social media companies, researchers and civil society to tackle this problem.