
‘Algospeak’ is changing our language in real time


“Algospeak” is becoming more common on the internet as people try to bypass content moderation filters on social media platforms like TikTok, YouTube, Instagram and Twitch.

Algospeak refers to code words or turns of phrase that users have adopted in an effort to create a brand-safe lexicon that will prevent their posts from being removed or down-ranked by content moderation systems. For example, in many online videos, it is common to say “unalive” rather than “dead,” “SA” instead of “sexual assault,” or “spicy eggplant” instead of “vibrator.”

As the pandemic has pushed more and more people to communicate and express themselves online, algorithmic content moderation systems have had an unprecedented impact on the words we choose, particularly on TikTok, and have given rise to a new form of internet-driven Aesopian language.

Unlike other mainstream social platforms, the primary way content is distributed on TikTok is through an algorithmically curated “For You” page; having followers does not guarantee that people will see your content. This shift has led average users to tailor their videos primarily to the algorithm, rather than to a following, meaning that abiding by content moderation rules is more crucial than ever.

When the pandemic hit, people on TikTok and other apps began referring to it as the “Backstreet Boys reunion tour” or calling it the “panini” or “panda express,” as platforms down-ranked videos that mentioned the pandemic by name in an effort to combat misinformation. When young people began discussing struggling with mental health, they talked about “becoming unalive” in order to have frank conversations about suicide without algorithmic punishment. Sex workers, who have long been censored by moderation systems, refer to themselves on TikTok as “accountants” and use the corn emoji as a substitute for the word “porn.”

As discussions of big events are filtered through algorithmic content delivery systems, more and more users are bending their language. When discussing the invasion of Ukraine recently, people on YouTube and TikTok used the sunflower emoji to refer to the country. When encouraging fans to follow them elsewhere, users will say “blink in lio” for “link in bio.”

Euphemisms are especially common in radicalized or harmful communities. Pro-anorexia eating disorder communities have long adopted variations of moderated words to evade restrictions. One paper from the School of Interactive Computing at the Georgia Institute of Technology found that the complexity of such variants even increased over time. Last year, anti-vaccine groups on Facebook began changing their names to “dance party” or “dinner party,” and anti-vaccine influencers on Instagram used similar code words, referring to vaccinated people as “swimmers.”

Tailoring language to avoid scrutiny predates the Internet. Many religions have avoided uttering the devil’s name so as not to summon him, while people living under repressive regimes have developed code words to discuss taboo topics.

Early Internet users used alternate spellings or “leetspeak” to get around word filters in chat rooms, image boards, online games and forums. But algorithmic content moderation systems are more pervasive on the modern Internet, and they often end up silencing marginalized communities and important discussions.

During the 2017 YouTube “adpocalypse,” when advertisers pulled their dollars from the platform over fears of unsafe content, LGBTQ creators spoke about having videos demonetized for saying the word “gay.” Some began using the word less or substituting other words to keep their content monetized. More recently, users on TikTok have taken to saying “cornucopia” rather than “homophobia,” or claiming to be members of the “leg booty” community to signal that they are LGBTQ.

“There’s a line we have to toe, it’s an unending battle of saying something and trying to get the message across without directly saying it,” said Sean Szolek-VanValkenburgh, a TikTok creator with over 1.2 million followers. “It disproportionately affects the LGBTQIA community and the BIPOC community because we’re the people who create that verbiage and come up with the colloquialisms.”

Conversations about women’s health, pregnancy and menstrual cycles on TikTok are also consistently down-ranked, said Kathryn Cross, a 23-year-old content creator and founder of Anja Health, a start-up offering umbilical cord blood banking. She replaces the words “sex,” “period” and “vagina” with other words or spells them out with symbols in captions. Many users say “nip nops” rather than “nipples.”

“It makes me feel like I need a disclaimer because I feel like it makes you seem unprofessional to have these weirdly spelled words in your captions,” she said, “especially for content that’s supposed to be serious and medically inclined.”

Because online algorithms often flag content that mentions certain words, devoid of context, some users avoid saying them altogether, simply because they have alternate meanings. “You have to say ‘saltines’ when you’re literally talking about crackers now,” said Lodane Erisian, a community manager for Twitch creators (Twitch considers the word “cracker” a slur). Twitch and other platforms have even gone so far as to remove certain emotes because people were using them to communicate certain words.

Black and trans users, and those from other marginalized communities, often use algospeak to discuss the oppression they face, swapping out words for “white” or “racist.” Some are too nervous to say the word “white” at all and simply hold their palm toward the camera to signify white people.

“The reality is that tech companies have been using automated tools to moderate content for a really long time, and while it’s touted as sophisticated machine learning, it’s often just a list of words they deem problematic,” said Ángel Díaz, a professor at the UCLA School of Law who studies technology and racial discrimination.
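
The kind of system Díaz describes can be strikingly blunt. As a purely illustrative sketch, assuming a simple substring match against a wordlist (the blocklist, function name and matching rule below are this article’s assumptions, not any platform’s actual implementation), such a filter behaves like this:

```python
# Illustrative sketch of a context-free wordlist filter, the kind of
# "just a list of words" moderation Diaz describes. The blocklist and
# matching rule are assumptions, not any platform's actual system.

BLOCKLIST = {"cracker", "sex", "dead"}

def is_flagged(caption: str) -> bool:
    """Flag a caption if any blocklisted term appears anywhere in it."""
    text = caption.lower()
    return any(term in text for term in BLOCKLIST)

print(is_flagged("soup and crackers for lunch"))  # True: matches "cracker", context ignored
print(is_flagged("soup and saltines for lunch"))  # False: the algospeak substitute slips through
```

A matcher this literal flags an innocent recipe and misses the euphemism in the same breath, which is exactly the trade-off creators describe navigating.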

In January, Kendra Calhoun, a postdoctoral researcher in linguistic anthropology at UCLA, and Alexia Fawcett, a doctoral student in linguistics at UC Santa Barbara, gave a presentation about language on TikTok. They outlined how, by self-censoring words in the captions of TikToks, new algospeak code words emerged.

TikTok users are now using the phrase “le dollar bean” instead of “lesbian” because that’s how TikTok’s text-to-speech feature pronounces “Le $bian,” a censored way of spelling “lesbian” that users think may evade content moderation.
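
For illustration, here is why such a spelling can slip past a literal match while still being speakable. This is a toy sketch assuming the same substring-style filter as above and a crude symbol-expanding text-to-speech front end; neither is TikTok’s actual pipeline:

```python
# Illustrative sketch: a symbol-swapped spelling evades a literal string
# match, while a text-to-speech front end reads the symbol by name.
# The blocklist and symbol table are assumptions, not TikTok's pipeline.

BLOCKLIST = {"lesbian"}
SYMBOL_NAMES = {"$": " dollar "}  # crude stand-in for how a TTS engine voices symbols

def is_flagged(caption: str) -> bool:
    return any(term in caption.lower() for term in BLOCKLIST)

def naive_tts(caption: str) -> str:
    """Expand symbols to their spoken names, as a TTS front end might."""
    return "".join(SYMBOL_NAMES.get(ch, ch) for ch in caption)

caption = "le $bian"
print(is_flagged(caption))  # False: the "$" breaks the literal match
print(naive_tts(caption))   # "le  dollar bian" -- roughly "le dollar bean" aloud
```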

Evan Greer, director of Fight for the Future, a nonprofit digital rights advocacy group, said that trying to stamp out specific words on platforms is a fool’s errand.

“One, it doesn’t actually work,” she said. “The people who use platforms to organize real harm are pretty good at figuring out how to get around these systems. And two, it leads to collateral damage of literal speech.” Attempting to regulate human speech at a scale of billions of people in dozens of different languages, and trying to contend with things such as humor, sarcasm, local context and slang, can’t be done by simply down-ranking a few words, Greer argues.

“I feel like this is a good example of why aggressive moderation is never going to be a real solution to the harms we see from the business practices of big tech companies,” she said. “You can see how slippery this slope is. Over the years we have seen an increasingly misguided demand from the general public for platforms to remove more content quickly, regardless of the cost.”

Major TikTok creators have created shared Google Docs with lists of hundreds of words they believe the app’s moderation systems deem problematic. Other users keep running tallies of terms they believe have throttled certain videos, trying to reverse engineer the system.

“Zuck Got Me For,” a site created by a meme account administrator who goes by Ana, is a place where creators can upload nonsensical content that has been banned by Instagram’s moderation algorithms. In a manifesto about her project, she wrote: “Creative freedom is one of the only silver linings of this flaming online hell we all exist in… As the algorithms tighten, it’s independent creators who suffer.”

She also outlines how to speak online in a way that bypasses filters. “If you’ve violated the terms of service, you may not be able to use profanity or negative words like ‘hate,’ ‘kill,’ ‘ugly,’ ‘stupid,’ etc.,” she said. “I often write, ‘I opposite of love xyz’ instead of ‘I hate xyz.’”

The Online Creators’ Association, a labor advocacy group, has also released a list of demands, asking TikTok for more transparency about how it moderates content. “People have to dull their own language to keep from offending these all-seeing, all-knowing TikTok gods,” said Cecelia Gray, a TikTok creator and co-founder of the organization.

TikTok offers an online resource center for creators seeking to learn more about its recommendation systems, and it has opened multiple transparency and accountability centers where guests can learn how the app’s algorithm operates.

Vince Lynch, CEO of IV.AI, an AI platform for understanding language, said that in some countries where moderation is heavier, people end up constructing new dialects to communicate. “It becomes real secondary languages,” he said.

But as algospeak becomes more popular and replacement words morph into common slang, users are finding that they have to get ever more creative to evade the filters. “It turns into a game of whack-a-mole,” said Gretchen McCulloch, a linguist and author of “Because Internet,” a book about how the Internet has shaped language. As platforms begin to notice people saying “seggs” instead of “sex,” for example, some users report that they believe even the replacement words are being flagged.

“We end up creating new ways of speaking to avoid this kind of moderation,” said Díaz of the UCLA School of Law, “then we end up embracing some of these words and they become common vernacular. It’s all born out of this effort to resist moderation.”

This isn’t to say that all efforts to stamp out bad behavior, harassment, abuse and misinformation are fruitless. But Greer argues that it’s the root issues that need to be prioritized. “Aggressive moderation is never going to be a real solution to the harms we see from the business practices of big tech companies,” she said. “That’s a task for policymakers and for building better things, better tools, better protocols and better platforms.”

Ultimately, she added, “you’ll never be able to sanitize the Internet.”