As Artificial Intelligence Spreads on Social Media, Users Struggle to Know What to Trust

By Zoe Santos, arts, culture, and sports reporter

BLACKSBURG, Va. (Dec. 11, 2025) - Virginia Tech sophomore Cooper Teich is looking at an AI-generated image of an AI influencer posing with Elon Musk.

Artificial intelligence has become increasingly visible on social media, shaping what users see, share, and believe online. Once limited to photo filters and automated captions, AI now generates realistic videos, images, and digital personas that blend seamlessly into everyday feeds. As the technology spreads across platforms such as Instagram and TikTok, users are left to question what is real and how that uncertainty is reshaping online culture.

Virginia Tech sophomore Cooper Teich said artificial intelligence appears in her social media feeds multiple times a day, often without a clear indication that the content is not real. While some posts are clearly labeled or exaggerated, others resemble authentic news footage or personal content shared by real users.

Cooper Teich, a Virginia Tech sophomore, poses for a photo.

“There are videos where something bad happens, and you don’t know if it actually happened,” Teich said. “I don’t know what to believe.”

Teich said the growing presence of AI-generated content has changed how she engages with social media. She now scrolls more cautiously, pauses more frequently, and checks comment sections for context before accepting videos at face value. What once felt like passive consumption has become an active process of verification.

She said the emotional impact of AI-generated content is often immediate, particularly when videos depict emergencies, violence, or distressing situations. Even when the content is later identified as artificial, the initial reaction remains.

“You still feel something when you see it,” Teich said. “Even if you find out it’s fake, the reaction already happened.” 

The uncertainty surrounding AI-generated content became more apparent during Thanksgiving break, when a family member showed Teich a video he believed depicted a serious car crash near his home.

“He thought it happened on his street,” Teich said. “He was really concerned and went outside to check.”

The video was entirely generated by artificial intelligence. 

Teich said moments like that illustrate how AI-generated content affects more than just younger users who are accustomed to questioning what they see online. Older adults, she said, are often more likely to accept realistic videos at face value, especially when they resemble local news footage or familiar environments.

“Imagine how often that happens when no one’s there to explain it,” she said. 

“I don’t know what to believe.”
– Cooper Teich, Virginia Tech sophomore

While younger users may be quicker to suspect a video is AI, Teich said the responsibility to interpret and verify content increasingly falls on individuals, regardless of age. That responsibility creates a culture in which uncertainty is normalized, and skepticism becomes necessary for everyday media consumption.

A screenshot from the Pew Research Center website shows differences between U.S. adults and AI experts in how they view artificial intelligence’s future impact.

Concerns about artificial intelligence extend beyond individual experiences. A 2025 Pew Research Center report found a large divide between how U.S. adults and AI experts view the technology’s future. About half of AI experts surveyed said artificial intelligence will have a positive effect on society, while only a small share of U.S. adults expressed the same optimism.

The gap suggests that while those working most closely with AI tend to see its potential benefits, the broader public remains far more cautious. That divide reflects a cultural disconnect between technological development and public trust as artificial intelligence becomes more visible in daily life.

Carolyn Kogan, a Virginia Tech adjunct instructor, poses for a photo. (Image courtesy of Carolyn Kogan)

Carolyn Kogan, an adjunct professor at Virginia Tech who studies online behavior and digital culture, said artificial intelligence intensifies long-standing issues on social media by increasing realism while reducing accountability.

“When accountability is lowered, people react more emotionally and question less,” Kogan said. “AI makes that problem worse because it looks real.”

Kogan said misinformation is not new to social media, but artificial intelligence allows false or misleading content to spread faster and appear more convincing than before. Visual realism, she said, increases the likelihood that users will engage with content emotionally before evaluating its accuracy.

“Images and videos carry a different kind of authority,” Kogan said. “People trust what they can see.”

She explained that social media has long encouraged users to present idealized, public-facing versions of themselves, a pattern sociologists refer to as "front stage" behavior. Artificial intelligence, she said, accelerates that process by removing the human element entirely.

“People already curate a front-facing version of their lives online,” Kogan said. “AI takes that one step further by removing the human altogether.”

Without a real person behind the content, accountability becomes increasingly abstract. AI-generated images and videos can circulate widely without a clear creator, making it difficult to determine who is responsible when the content is misleading or harmful. 

“When accountability is lowered, people react more emotionally and question less.”
– Carolyn Kogan, adjunct professor at Virginia Tech

That lack of accountability, Kogan said, contributes to a culture where skepticism is necessary but not always practiced.

“Not everyone has the same ability or awareness to question what they’re seeing,” she said.

On platforms such as Instagram, artificial intelligence appears in both obvious and subtle ways. In addition to AI-generated videos and images, some accounts feature AI influencers: digital personas designed to look and behave like real content creators. These accounts often post lifestyle content, promote products, and interact with followers, sometimes without clear disclosure that they are not human.

While AI influencers represent only one segment of AI-driven content online, Kogan said their presence reflects a broader cultural shift in how social media operates.

“These platforms reward engagement, not authenticity,” Kogan said. “If something performs well, it gets amplified, whether it’s real or not.”

Teich said encountering AI influencers has made her more skeptical of what appears in her feed.

“You’ll see someone who looks completely real, and then you find out they don’t even exist,” she said. “It makes you stop and question everything else you’re seeing.”

She said that realization has changed how she interacts with influencers more broadly, including human creators who use heavy editing or undisclosed AI tools.

Social media companies have begun responding to growing concerns about artificial intelligence. Meta, which owns Instagram and Facebook, now requires creators to label content that has been significantly altered or generated by AI. TikTok and YouTube have introduced similar disclosure policies for realistic AI content.

The policies are intended to help users better understand what they are seeing and reduce the spread of misleading content. However, enforcement varies across platforms, and labels are not always immediately visible to viewers.

AI-generated videos and images can still circulate widely before users realize the content is artificial, particularly when posts are reposted, edited, or shared without context.

Teich said labels can be helpful, but do not fully address the problem.

“If it looks real, people are going to believe it at first,” she said.

She also questioned the ethics of allowing highly realistic AI content to circulate freely in the same spaces as authentic photos and videos.

“I don’t think it’s ethical,” Teich said. “It makes you question what social media is even supposed to be.”

Despite growing skepticism, Teich said avoiding social media altogether feels unrealistic. Like many college students, she relies on platforms such as Instagram and TikTok for communication, entertainment, and information, even as trust in what appears online continues to erode.

That reliance on social media, Kogan said, reflects a broader cultural reality. Social media platforms are deeply embedded in daily life, making disengagement difficult even for users who are aware of the risks.

“When people can’t trust what they’re seeing, it affects how they interact with content and with each other,” Kogan said. “It changes how relationships, information, and identity function online.”

Kogan said artificial intelligence forces users to confront those issues more directly, pushing questions of trust and authenticity to the forefront of digital culture.

For Teich, navigating social media now requires her to be skeptical of anything she sees. She scrolls more carefully, questions videos that provoke strong emotional reactions, and relies on external context to determine whether content is credible.

“It just makes everything feel less certain,” she said.

As artificial intelligence becomes harder to separate from reality, users are left to adapt in real time. In a digital environment where fabricated and authentic content coexist, the ability to question what appears on a screen has become an essential part of social media use and a defining feature of online culture. 