World news

AI is intensifying a ‘collapse’ of trust online, experts say


For years, people could largely trust, at least instinctively, that seeing was believing. Now, what’s fake often looks real and what’s real often looks fake.

Within the first week of 2026, that blurring has already become a conundrum that many media experts say will be hard to move past, thanks to advances in artificial intelligence.

President Donald Trump’s Venezuela operation almost immediately spurred the spread of AI-generated images, old videos and altered photos across social media. On Wednesday, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, many online circulated a fake, most likely AI-edited image of the scene that appears to be based on real video. Others used AI in attempts to digitally remove the mask of the ICE officer who shot her.

The confusion around AI content comes as many social media platforms, which pay creators for engagement, have given users incentives to recycle old photos and videos to ramp up emotion around viral news moments. The amalgam of misinformation, experts say, is creating a heightened erosion of trust online — especially when it mixes with authentic evidence.

“As we start to worry about AI, it will likely, at least in the short term, undermine our trust default — that is, that we believe communication until we have some reason to disbelieve,” said Jeff Hancock, founding director of the Stanford Social Media Lab. “That’s going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces.”

Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from election misinformation in 2016 to the mass production of propaganda after the printing press was invented in the 1400s. Before AI, there was Photoshop, and before Photoshop, there were analog image manipulation techniques.

Fast-moving news events are where manipulated media have the biggest effect, because they fill the information vacuum that surrounds breaking news, Hancock said.

On Saturday, Trump shared a photo on his verified Truth Social account of the deposed Venezuelan leader Nicolás Maduro blindfolded and handcuffed aboard a Navy assault ship. Shortly afterward, unverified images surrounding the capture — some of which were then turned into AI-generated videos — began to flood other social media platforms.

As real celebrations unfolded, X owner Elon Musk was among those sharing what appeared to be an AI-generated video of Venezuelans thanking the U.S. for capturing Maduro.

AI-generated evidence has already made its way into courtrooms. AI deepfakes have also fooled officials — late last year, a flood of AI-generated videos online portrayed Ukrainian soldiers apologizing to the Russian people and surrendering to Russian forces en masse.

Hancock said that even as much of the misinformation online still comes through more traditional avenues, such as people misappropriating real media to paint false narratives, AI is rapidly dumping more fuel on the fire.

“In terms of just looking at an image or a video, it will essentially become impossible to detect if it’s fake. I think that we’re getting close to that point, if we’re not already there,” he said. “The old sort of AI literacy ideas of ‘let’s just look at the number of fingers’ and things like that are likely to go away.”

Renee Hobbs, a professor of communication studies at the University of Rhode Island, said the main struggle for researchers who study AI is the cognitive exhaustion people face as they try to navigate the sheer volume of real and synthetic content online. That exhaustion makes it harder for them to sift through what’s real and what’s not.

“If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It’s a coping mechanism,” Hobbs said. “And then when people stop caring about whether something’s true or not, then the danger is not just deception, but actually it’s worse than that. It’s the whole collapse of even being motivated to seek truth.”

She and other experts are working to figure out how to incorporate generative AI into media literacy education. For example, the Organization for Economic Co-operation and Development, an intergovernmental body of democratic countries that collaborate to develop policy standards, is scheduled to release a global Media & Artificial Intelligence Literacy assessment for 15-year-olds in 2029.

Even some social media giants that have embraced generative AI appear wary of its infiltration into people’s algorithms.

In a recent post on Threads, the head of Instagram, Adam Mosseri, touched on his concerns about AI misinformation becoming more common across platforms.

“For most of my life I could safely assume that the vast majority of photographs or videos that I see are largely accurate captures of moments that happened in real life,” he wrote. “This is clearly no longer the case and it’s going to take us, as people, years to adapt.”

Mosseri predicted that internet users will “move from assuming what we see is real by default, to starting with skepticism when we see media, and paying much more attention to who is sharing something and why they might be sharing it. This is going to be incredibly uncomfortable for all of us because we’re genetically predisposed to believing our eyes.”

Hany Farid, a professor of computer science at the UC Berkeley School of Information, said his recent research on deepfake detection has found that people are just as likely to say something real is fake as they are to say something fake is real. The accuracy rate worsens significantly when people are shown content with political undertones — because then confirmation bias kicks in.

“When I send you something that conforms to your worldview, you want to believe it. You’re incentivized to believe it,” Farid said. “And if it’s something that contradicts your worldview, you’re highly incentivized to say, ‘Oh, that’s fake.’ And so when you add that partisanship onto it, it blows everything out of the water.”

People are also likelier to immediately trust those they’re familiar with — such as celebrities, politicians, family members and friends — so AI likenesses of such figures will be even likelier to dupe people as they get more realistic, said Siwei Lyu, a professor of computer science at the University at Buffalo.

Lyu, who helps maintain an open-source AI detection platform called DeepFake-o-meter, said everyday internet users can boost their AI detection skills simply by paying attention. Even if they don’t have the ability to analyze every bit of media they come across, he said, people should at least ask themselves why they trust or distrust what they see.

“In many cases, it may not be the media itself that has anything wrong, but it’s put up in the wrong context or by somebody we cannot totally trust,” Lyu said. “So I think, all in all, common awareness and common sense are the most important protection measures we have, and they do not need special training.”