
Researchers Qiang Liu, Lin Wang, and Mengyu Luo from the University of Science and Technology in Shanghai set out to examine how deepfake technology affects the perception of information and users' trust in media. In their article When Seeing Is Not Believing: Self-efficacy and Cynicism in the Era of Intelligent Media, published in Humanities and Social Sciences Communications, they conducted two experiments involving 1,826 participants, analyzing how cynicism toward information changes depending on users' ability to recognize AI-generated content.
The experiments revealed that:
- Individuals who rate their own ability to recognize AI-generated content as low are more likely to question the authenticity of news that matters to them personally.
- Low-risk content paradoxically raises more skepticism than content considered high-risk.
- Users who repeatedly struggle to assess the authenticity of deepfakes lose confidence in their abilities and abandon efforts to verify information.
This phenomenon leads to a so-called "apathetic reality," in which audiences choose indifference over critical thinking about the media they consume.
Why Are We Losing Trust in Media?
The growing cynicism toward AI-generated information is not just a technological issue but also a matter of how audiences process content. Research suggests that users engage more in content analysis when they feel the topic directly affects them.
Data shows that:
| Factor | Impact on self-assessed ability to recognize AI content | Impact on cynicism |
|---|---|---|
| High content relevance | Increased confidence | Reduced cynicism |
| Low content relevance | Decreased confidence | Increased cynicism |
| High-risk news | Greater inclination to verify | Lower cynicism |
| Low-risk news | Less interest in verification | Higher cynicism |
The study's authors highlight the effect of cognitive fatigue. When users repeatedly encounter situations where they cannot distinguish deepfakes from real content, they stop making an effort to check. Ultimately, instead of verifying authenticity, they begin treating all information as potentially unreliable.
What Can We Do?
Experts emphasize that the solution lies not only in developing deepfake detection technologies but also in shaping new media literacy models.
- Social media platforms should implement more advanced mechanisms for labeling AI-generated content.
- Users should be trained not only in recognizing fake news but also in consciously processing information in an era of informational chaos.
- Journalists should make greater use of content verification tools and build communication strategies based on source transparency.
Liu, Wang, and Luo's study found that even a small increase in users' confidence in recognizing AI-generated content leads to a significant reduction in cynicism and greater engagement in content analysis. This means that education and tools supporting content verification can help audiences regain control over what they consider true.
The Future of Trust in Information
Deepfake news is a challenge we will face for years to come. In the age of artificial intelligence, it is not just technology that determines what we believe but also our ability to recognize and critically analyze content. If we do not begin developing skills to navigate the world of synthetic information, we may find ourselves in a reality where we cannot even trust what we see with our own eyes.
* * *
Article: Liu, Q., Wang, L., & Luo, M. (2025). When Seeing Is Not Believing: Self-efficacy and Cynicism in the Era of Intelligent Media. Humanities and Social Sciences Communications. Available at https://www.nature.com/articles/s41599-025-04594-5