Are TikTok algorithms changing the way people talk about suicide?


Kayla Williams has never said the word “suicide” on TikTok, even though she uses the platform to discuss mental health issues with her 80,000 followers. Since the start of the pandemic, the 26-year-old student from Berkshire, England, has posted several videos about her suicidal thoughts and her stay in a psychiatric ward. Some of these clips are lighthearted, others much more serious. Yet Williams doesn’t say the word “suicide” in front of her camera, or type it into her captions, lest the TikTok algorithm censor or remove her content. Instead, she uses the word “unalive”.

The #unalivemeplease hashtag has 9.2 million views on TikTok; #unalive has 6.6 million; #unaliveawareness has another 2.2 million. Although #suicideprevention is a frequently used tag on the app, the hashtags #suicide and #suicideawareness return no results. If you search for them, TikTok instead displays the number of a local crisis hotline. It is a well-intentioned policy, introduced in September 2021, a year after a graphic video of a suicide spread across the app. But users have also come to fear elusive content moderation filters that seem to suppress or remove videos discussing death, suicide or self-harm.

While the word “unalive” first became popular in 2013 (when it was used in an episode of Ultimate Spider-Man), Google searches for the term increased dramatically in 2022. From TikTok, “unalive” spread to Twitter and Reddit; YouTubers also use it so that their content is not demonetized. Depending on the context, the word can refer to suicide, murder or death. While “unalive” is often used comically on TikTok, people like Williams also use it to speak out, build community, and flag resources on the app. The rapid rise of “unalive” therefore raises a disturbing question: what happens when we stop saying “suicide” openly?

“I think it’s kind of a joke on such a serious subject,” Williams says of the term. Although she likes to say “unalive” when she intentionally wants to make her videos “less heavy,” she adds, “I don’t like that, because we should be able to talk about the heavy stuff without being censored.”

Williams fears that the word “unalive” reinforces the stigma around suicide. “I think even though the word is great for preventing TikTok from deleting videos, it means the word ‘suicide’ is still considered taboo and a difficult subject to talk about,” she says. She also replaces other mental health terms so her videos aren’t automatically flagged for review: “eating disorder” becomes “ED”, “self-harm” becomes “SH”, “depression” becomes “d3pression”. (Other users of the site use tags like #SewerSlide and #selfh_rm.)

Prianka Padmanathan is a clinical academic in psychiatry at the University of Bristol; in 2019, she conducted a study on language use and suicide, surveying just under 3,000 people affected by suicide. Padmanathan asked participants to rate the acceptability of various phrases and found that “suicide attempt”, “committed suicide”, “died by suicide” and “ended their life” were considered the most acceptable expressions for discussing both nonfatal and fatal suicidal behavior.

A number of interviewees expressed concerns about the complete avoidance of the word “suicide”. One participant said it was “dangerous” and “isolating” to avoid the word, while another said: “My brother killed himself and my sister tried to kill herself. I don’t think we should be afraid to use the word.”

“Overall, respondents indicated a preference for terms perceived as factual, clear, descriptive, commonly used, non-emotional, non-stigmatizing, respectful, and validating,” says Padmanathan. More research is needed to determine whether “unalive” could be stigmatizing, but she notes that words can and do affect how we think about suicide, citing a 2018 study.

Sharon D. Cole