
Why Charlie Kirk assassination videos are still spreading online


Graphic videos of conservative activist Charlie Kirk’s assassination continued to spread across social media platforms Thursday, with many companies choosing to place the videos behind content warnings rather than take them down entirely.

On YouTube and Meta platforms, videos that showed the moment Kirk was hit by the bullet required users to acknowledge that they were willing to see sensitive content.

On other platforms, including X and TikTok, many of the videos remained easily accessible without any warning.

The spread of videos depicting violent incidents, like shootings, has been a perennial issue for social media platforms, complicated in recent years by a shift away from aggressive, human-based moderation. Most companies still have policies either banning or limiting the spread of gory videos.

And while the backlash against moderation has generally been led by conservatives, some Republicans have called for action over the video of Kirk’s killing. Rep. Marjorie Taylor Greene, R-Ga., said Wednesday that she hoped the video was removed from social media platforms, and Rep. Anna Paulina Luna, R-Fla., called on X CEO Elon Musk, Meta CEO Mark Zuckerberg and TikTok to take down videos of the shooting.

“These are not the only graphic videos of horrifying murders circulating — at some point, social media begins to desensitize humanity. We must still value life,” she wrote. “Please take them down.”

Luna later praised the social media companies, saying there had been “full cooperation,” despite most of them leaving versions of the videos online. X did not appear to respond to Luna publicly and did not respond to a request for comment.

Moderating social media platforms with millions of users has proven a constant logistical challenge for their owners. Some technological solutions have helped, such as software that can quickly identify re-uploaded videos, but users have historically been able to quickly modify content to get around those systems.
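That re-upload detection typically relies on fingerprinting techniques such as perceptual hashing, which tolerates small edits but not larger ones. The sketch below is a minimal illustration of the idea using the open-source Pillow and imagehash Python libraries; the fingerprint value is made up, and real platforms use their own proprietary video-matching systems, not this code.

```python
# Minimal sketch of re-upload detection via perceptual hashing.
# Assumes the open-source Pillow and imagehash libraries; real platforms
# use proprietary video-fingerprinting systems, not this exact approach.
import imagehash
from PIL import Image

# Fingerprints of frames from videos the platform has already removed
# (hypothetical hex value, for illustration only).
KNOWN_REMOVED_HASHES = {imagehash.hex_to_hash("d1c4a0b2e8f39657")}

# Frames whose hashes differ by at most this many bits count as a match.
HAMMING_THRESHOLD = 8

def is_likely_reupload(frame_path: str) -> bool:
    """Return True if a video frame is perceptually close to known removed content."""
    frame_hash = imagehash.phash(Image.open(frame_path))
    return any(frame_hash - known <= HAMMING_THRESHOLD
               for known in KNOWN_REMOVED_HASHES)

# A light crop or filter flips only a few bits of the hash, so the check still
# fires; heavier edits (mirroring, overlays, re-framing) can push the distance
# past the threshold, which is how re-uploaders slip past systems like this.
```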

That progress has been stymied by a broader pullback on moderation, according to Hany Farid, a professor of computer science at the University of California, Berkeley, and an expert in the intersection of technology and society.

“We have seen limited success in preventing the spread of violence and extremism online,” Farid said. “This has been particularly true in the last year, when most major online platforms have de-prioritized trust and safety.”

Social media platforms have a patchwork of policies governing gory and graphic videos.

TikTok bans “gory, gruesome, disturbing, or extremely violent content” in its “shocking and graphic content” policy but allows some carveouts for videos it defines as “in the public interest.”

Uncensored videos of Kirk’s shooting were available on TikTok without content warnings Thursday afternoon, according to a search of the platform. Some graphic videos of the attack had been online for more than 20 hours. After NBC News contacted the platform, some of the videos were removed, and the platform placed click-through “sensitive content” screens on other videos of the shooting.

TikTok’s guidelines note that “content is restricted (18 years and older) and ineligible for the FYF if it shows human or animal blood, extreme physical fighting, or graphic footage of events that would otherwise violate our rules but are in the public interest to view.” The FYF is TikTok’s algorithmically driven For You Feed.

“These horrific violent acts have no place in our society,” TikTok spokesperson Jamie Favazza said in a statement. “We remain committed to proactively enforcing our Community Guidelines and have implemented additional safeguards to prevent people from unexpectedly viewing footage that violates our rules.”

The persistent availability of Kirk’s assassination footage comes just weeks after TikTok announced it would lay off hundreds of human content moderators in favor of increased AI-enabled content moderation. It was at least the third round of content moderator layoffs at TikTok in the past year, with previous rounds reported in October 2024 and February 2025.

As of Thursday afternoon, Meta was allowing video of the Utah shooting to remain on its platforms, but with a “sensitive content” warning label and restricted to users 18 and older. On Instagram, however, users could still encounter the full video of the assassination without a warning: when they searched for videos of Kirk’s assassination, some would autoplay on the search page. A Meta spokesperson referred NBC News to its policies on violent and graphic content.

YouTube was also permitting graphic video of the shooting to remain online behind a content warning screen.

A YouTube spokesperson said, “Our hearts are with Charlie Kirk’s family following his tragic death. We are closely monitoring our platform and prominently elevating news content on the homepage, in search and in recommendations to help people stay informed.”

In response to the proliferation of graphic videos of Wednesday’s shooting, the Global Internet Forum to Counter Terrorism (GIFCT), which maintains a database used by tech companies to help identify re-uploaded violent content, activated an emergency protocol designed to limit the spread of videos depicting terrorism or extreme violence. GIFCT, which became an independent organization in the wake of the 2019 terrorist attacks in Christchurch, New Zealand, counts Meta, TikTok, YouTube and Microsoft among its members.
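At its core, that kind of shared database lets a clip removed on one platform be flagged on another by pooling content fingerprints rather than the footage itself. The sketch below is a simplified, hypothetical illustration of that idea; the class name, fingerprint value and exact-match logic are assumptions for demonstration, not GIFCT’s actual database format or matching rules.

```python
# Simplified, hypothetical sketch of hash sharing between platforms, in the
# spirit of the GIFCT database described above. The fingerprint value and the
# exact-match logic are assumptions; the real database is not public.
from typing import Set

class SharedHashDatabase:
    """A pooled set of content fingerprints contributed by member platforms."""

    def __init__(self) -> None:
        self._fingerprints: Set[str] = set()

    def contribute(self, fingerprint: str) -> None:
        # A member that removes a violent video shares only its fingerprint,
        # never the footage itself.
        self._fingerprints.add(fingerprint)

    def matches(self, fingerprint: str) -> bool:
        # Other members check new uploads against the pooled fingerprints.
        return fingerprint in self._fingerprints

# One platform removes a clip and contributes its fingerprint ...
shared_db = SharedHashDatabase()
shared_db.contribute("d1c4a0b2e8f39657")

# ... so another platform can flag the same clip at upload time.
print(shared_db.matches("d1c4a0b2e8f39657"))  # True
```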

“We are working closely with our members and multi-stakeholder partners to monitor developments, identify potential related online content, and provide support to members in addressing content relating to this incident,” said Jake Lebsack, GIFCT’s lead spokesperson.