Can TikTok’s Algorithm Changes Stop the Spread of Harmful Content?

The Wrap

As the fastest-growing social media platform, TikTok is under scrutiny to keep its algorithm from turning it into another Facebook, a company vilified by everyone from governments to parents’ groups to democracy advocates for driving harmful content.

But while the newly ascendant video service has pledged to rework its secret viral recommendation engine, it faces challenges, from users finding their own workarounds to skepticism from Congress and technology experts. New research has found that TikTok is driving misinformation in posts about drugs, violence and anti-vaccination, deepening doubts among many that the company really intends to veer from its path of massive growth.

“Unfortunately, just like Facebook, Instagram, and other social networks, in practice, this means that the most shocking and emotionally jarring content is the stuff that’s served most often,” Jon Brodsky, CEO at social network YouNow, told TheWrap. “This is the same story we’ve seen several times before, and unfortunately, it’s almost impossible to get people emotionally invested in less shocking content. The only difference between TikTok and Facebook is the speed at which people are being served jarring content.”

Baruch Labunski, founder at SEO firm Rank Secure, agreed. “It’s hard to believe that TikTok has suddenly developed a conscience and cares about the health and wellbeing of its users,” he said. “Everything in their algorithm is designed to bring users back and keep them on the platform longer.”

For one, experts agree that the experiment looks more like a PR maneuver aimed at appeasing regulators and critics in response to recent scrutiny than a move that will actually protect users from addictive, harmful content.

Researchers and lawmakers have criticized the app for sending users into a spiral of harmful or negative content, including content on self-harm, depression and eating disorders. Keeping users engaged with such content can take a toll on their mental health. The Mental Health Foundation says that overusing social media is associated with negative effects including greater loneliness, loss of self-worth and worsened anxiety or depression.

Moreover, users can simply skirt the changes by using altered spellings of the harmful or negative terms in question. It’s a workaround every social media platform deals with in content moderation: underage users can enter a false age, and targeted groups can keep changing their names or hashtags as the rules evolve.

“We’re currently testing ways to further diversify recommendations, while the tool that will let people choose words or hashtags associated with content they don’t want to see in their For You feed will be testing in the near future,” company spokesperson Jamie Favazza told TheWrap.

TikTok did not respond to requests for further comment.

Part of what makes TikTok so successful is the addictiveness driven by its algorithm, which learns a user’s interests from behavioral signals such as how long they linger on or rewatch a piece of content, then serves up more of the same. It works much like most social media recommendation systems, but TikTok appears to learn from this feedback and incorporate it far faster than its peers.
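The broad mechanics described here can be illustrated with a minimal, hypothetical sketch: a profile updated from engagement signals such as watch time and rewatches, then used to rank the next batch of candidate videos. The signal names, weights and scoring below are illustrative assumptions, not TikTok’s actual system.

```python
# Hypothetical sketch of engagement-weighted recommendation.
# Signals and weights are illustrative assumptions, not TikTok's system.
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    topic: str

@dataclass
class UserProfile:
    interests: dict = field(default_factory=dict)  # interest score per topic

    def record_view(self, video: Video, watch_fraction: float, rewatches: int) -> None:
        # Lingering (high watch fraction) and rewatching raise the topic's weight.
        signal = watch_fraction + 0.5 * rewatches
        self.interests[video.topic] = self.interests.get(video.topic, 0.0) + signal

    def score(self, video: Video) -> float:
        # Candidates on topics the user engaged with before rank higher.
        return self.interests.get(video.topic, 0.0)

def recommend(profile: UserProfile, candidates: list) -> list:
    # Rank the candidate pool by predicted interest, highest first.
    return sorted(candidates, key=profile.score, reverse=True)

# One strong engagement signal quickly dominates the ranking,
# which is the feedback loop critics describe.
user = UserProfile()
user.record_view(Video("v1", "extreme dieting"), watch_fraction=1.0, rewatches=2)
pool = [Video("v2", "cooking"), Video("v3", "extreme dieting"), Video("v4", "travel")]
print([v.topic for v in recommend(user, pool)])
```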

Five-year-old TikTok is among the fastest-growing social apps, especially among Gen Z users. Throughout 2021, TikTok’s growth outpaced its competitors as it hit 1 billion users last September. It took Facebook a little over eight years to gain 1 billion users, according to insights company BuyShares.

All this increased pressure on the popular app has the company rethinking some of its features. In December, the platform said it would begin implementing tools to “diversify” the For You page, a feed that provides users with an endless stream of content based on their interests. The app has also been criticized for spreading misinformation, including anti-vaccination messages and white supremacy content, as well as dangerous and violent posts involving Mexican drug cartels.

The move comes after TikTok executives testified before Congress for the first time last October in a committee panel focused on protecting children online and consumer safety. Executives from Facebook, Instagram, Snap and YouTube have also testified in recent months about the online safety of their products. With Instagram recently under fire for downplaying internal research showing that its products can harm teenage users, lawmakers are increasingly putting these companies on the hot seat, but some doubt that the pressure will get these businesses to change their ways.

TikTok, owned by Chinese tech company ByteDance, said it recognizes that too much of any one thing does not “fit with the diverse discovery experience we aim to create.” To combat this, TikTok said it aims to intersperse user recommendations by occasionally feeding them content that may fall outside of their preferences.

“As we continue to develop new strategies to interrupt repetitive patterns, we’re looking at how our system can better vary the kinds of content that may be recommended in a sequence. That’s why we’re testing ways to avoid recommending a series of similar content – such as around extreme dieting or fitness, sadness, or breakups – to protect against viewing too much of a content category that may be fine as a single video but problematic if viewed in clusters,” the post continued.

There has been ongoing debate over whether social platforms ultimately have a social responsibility to safeguard users from the harmful effects of their products. As Will Eagle, VP of marketing and strategy at digital talent network Collab, points out, users may not always do what’s best for themselves, and tech companies risk imploding their platforms if they disregard these content moderation issues.

“TikTok knows that there is a responsibility component that can (and should) play a role within the algorithm,” Eagle said. “Could they ignore it? Sure, but it would ultimately destroy the platform from within. The question on the table is, to what extent can the algorithm reinforce negative experiences?”

Other content, such as graphic footage of dangerous stunts or content showing the use of controlled substances, is also excluded from the For You recommendation system. TikTok said it removes content that shows or promotes eating disorders, dangerous acts and challenges, and reckless driving. Additionally, users can tap “not interested” if they are served content they don’t want, a feature Eagle recommends as a way to get out of a potentially negative or harmful vortex in the app.

The company said it is working with experts across medicine, clinical psychology, AI ethics and members of its Content Advisory Council as these tools get implemented. TikTok did not specify when it would introduce the tool that will allow people to choose words or hashtags associated with content they don’t want to see in their For You feed.

But many have noticed that altering spellings and adding accent marks can allow users to evade the new precautions. Flynn Zaiger, CEO of social media agency Online Optimism, cited examples of users writing “anôréxîâ,” while the unaccented spelling would bring up referrals to the National Eating Disorders Association for support.

“TikTok’s attempts to diversify the feed, three weeks in, have just led individuals to come up with new ways to continue to share similar content, no matter the danger, using different words and euphemisms to get around the algorithms,” Zaiger said.
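This kind of evasion is easy to see in miniature. The sketch below, built around a hypothetical one-word blocklist, shows why an exact-match filter misses accented spellings and how Unicode normalization can catch that particular variant; as Zaiger notes, deliberate misspellings and euphemisms need no accents at all, so normalization alone cannot close the gap.

```python
# Hypothetical sketch: naive keyword matching vs. accent-stripping normalization.
import unicodedata

BLOCKLIST = {"anorexia"}  # illustrative single term; real lists are far larger

def naive_flag(text: str) -> bool:
    # Exact substring match misses altered spellings such as "anôréxîâ".
    return any(term in text.lower() for term in BLOCKLIST)

def normalized_flag(text: str) -> bool:
    # Decompose accented characters and drop the combining marks,
    # so "anôréxîâ" reduces to "anorexia" before matching.
    decomposed = unicodedata.normalize("NFKD", text.lower())
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return any(term in stripped for term in BLOCKLIST)

caption = "tips for anôréxîâ"
print(naive_flag(caption))       # False: the altered spelling slips through
print(normalized_flag(caption))  # True: normalization catches this variant
```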

Ultimately, Zaiger does not believe this modification will be enough to combat the proliferation of harmful content. After all, this is how social media companies make money and stay competitive. He added: “TikTok’s initial announcement felt timed to antagonize and attempt to separate themselves from the negative stories coming out about Instagram.”
