How Meta, TikTok, YouTube and X moderate election threats

Four years after major social media platforms took unprecedented steps to crack down on falsehoods in the wake of the 2020 presidential election, their policies have loosened and their vigilance has faded, as The Washington Post’s Naomi Nix and Cat Zakrzewski report. Meanwhile, the “Stop the Steal” movement that began on the far-right fringe has evolved into a coordinated and sophisticated effort, as Nix, Zakrzewski and Drew Harwell reported Monday.

Heading into today’s election, the most influential social networks — including Meta’s Facebook, Instagram and Threads, Google’s YouTube, ByteDance’s TikTok and Elon Musk’s X — have policies and plans in place to moderate election threats and disinformation. Some categories of speech remain prohibited on all of them, including posts that mislead voters about how to vote, who can vote, or the laws and procedures surrounding elections. All four companies also prohibit posts that seek to intimidate voters, which is against the law.

But no two take exactly the same approach, creating a confusing information landscape where the same claim could be allowed by one platform and restricted by another. To understand the main similarities and differences, Tech Brief asked representatives from Meta, TikTok, YouTube and X to clarify their rules and their plans to enforce them.

Meta

Meta’s approach stands out in how it handles the “big lie” — the idea that widespread voter fraud cost Donald Trump the 2020 election and could do so again in 2024. Meta allows political advertisers to claim that the 2020 election was rigged, but it bans ads that question the legitimacy of the upcoming election. Users and politicians can make claims of voter fraud in their posts, but the company can take action if those posts push conspiracies that have been debunked by fact-checkers, include calls for violence or otherwise harass poll workers.

The social media giant has also changed how it treats political content on its social networks, de-emphasizing overtly political posts and accounts in its recommendations. And Meta has done less than in previous cycles to promote accurate information about the voting process through its Facebook voter information center.

Other aspects of Meta’s approach to policing election content are more opaque. In 2020, Meta, then called Facebook, said it would add a label to any post from a candidate claiming victory before election results were in, directing users instead to results from Reuters and the National Election Pool. Since then, Meta has said only that it will apply such labels as it deems necessary, and it has declined to elaborate on how it will handle premature declarations of victory.

TikTok

TikTok’s future in the United States is uncertain as it challenges a law requiring its sale or ban, but it played a bigger role in the 2024 election than in any previous cycle. It remains the only one of the four platforms to ban political advertising (though some ads sneak through anyway). It has instead become a battleground for murkier political influence campaigns, with creators taking money from political action committees and dark-money groups to promote candidates to their followers, often without disclosing it.

The company’s election and civic integrity policies prohibit misinformation about election procedures or outcomes, including claims that Trump won in 2020. As for premature claims of victory, unverified claims of voting irregularities and other potentially misleading election content, TikTok says such posts may be ineligible for recommendation in user feeds until they are reviewed by the company’s independent fact-checking partners. A January report from NYU’s Stern Center for Business and Human Rights found, however, that TikTok’s “tough-sounding policies” were undermined by “haphazard enforcement” that “failed to slow the spread of deniers’ lies.”

Google/YouTube

YouTube announced last year that it would stop removing videos claiming the 2020 election was rigged — a policy the company first adopted as the Stop the Steal movement gained traction. While YouTube reversed course in the name of preserving political speech, the company says it still removes content that encourages others to “interfere with democratic processes.” That could mean videos instructing viewers to hack government websites to delay the release of results or to deliberately create long lines at polling places to make voting difficult, or videos inciting violence against election officials.

The search giant has specific plans for Election Day. Google will put a link to its real-time election results tracker at the top of search results for campaign-related queries. The tracker, which draws its data from the Associated Press, will also appear alongside election-related videos on YouTube. After the last polls close on Nov. 5, Google will also pause all ads related to the US election — a break the company says could last several weeks.

X

Under Musk, X has narrowed the categories of speech that violate its rules, cut the resources it devotes to enforcing them and softened the penalties for breaking them. At the same time, Musk has emerged as a major Trump supporter and has amplified alleged cases of voter fraud.

X told Tech Brief that the company will enforce its civic integrity policy, which prohibits voter suppression and intimidation, false claims about election processes, and incitement to real-world violence. X will also enforce its policies on platform manipulation, synthetic media, deceptive account identities and violent content. It will “grow authoritative voting information” on users’ home timelines and through search queries, and encourage users to add context to each other’s posts using the site’s fact-checking feature, Community Notes.

However, these policies are the most minimal of the major platforms’. Posts that violate the civic integrity guidelines will be labeled as misleading but not removed, and the policy gives no indication that offending accounts will face consequences. X also does not work with independent fact-checking organizations to flag false claims.

Musk himself encourages posts that question the integrity of the election. Last week, he invited his 200 million followers to join a special hub on the site, run by his pro-Trump political action committee, America PAC, dedicated to sharing “potential incidents of voter fraud and irregularities,” many of which have been shown to be false or unfounded.

Some experts fear that the platforms’ policies will fail to contain an expected wave of falsehoods on Election Day and beyond.

In a guide to platforms’ post-election policies from the nonprofit Tech Policy Press, a group of researchers and analysts noted some “commendable” changes since 2020, including several platforms’ new bans on threats against election workers. But they called the loosened rules on disinformation at Facebook, YouTube and X “worrying” because they “may serve to delegitimize election results and contribute to violence.”

Katie Harbath, CEO of tech consultancy Anchor Change and former director of public policy at Facebook, has been tracking the election plans of platforms, including those of smaller platforms and messaging apps like Telegram and Discord, for months. She predicted that no single company will have a tremendous impact on the election because the information environment has become so fragmented.

“I’m most concerned about those platforms, like X, where Musk has already shown he is willing to amplify content that could incite violence,” Harbath said.

The overall environment “looks much worse than it did at this time in 2020,” added Paul M. Barrett, deputy director of NYU’s Stern Center. “The gasoline is splashing everywhere and I’m afraid the matches will be thrown and God help us.”