Researchers Qiang Liu, Lin Wang, and Mengyu Luo from the University of Science and Technology in Shanghai set out to examine how deepfake technology affects the perception of information and users' trust in media. In their article "When Seeing Is Not Believing: Self-efficacy and Cynicism in the Era of Intelligent Media", published in Humanities and Social Sciences Communications, they conducted two experiments involving 1,826 participants, analyzing how cynicism toward information changes depending on users' ability to recognize AI-generated content.
The experiments revealed that:
- Individuals who rate their own ability to recognize AI-generated content as low are more likely to question the authenticity of news that is personally significant to them.
- Content perceived as low-risk paradoxically elicits more skepticism than content perceived as high-risk.
- Users who repeatedly struggle to assess the authenticity of deepfakes lose confidence in their abilities and abandon efforts to verify information.
This phenomenon leads to the so-called "apathetic reality," where audiences choose indifference over critical thinking regarding media consumption.
Why Are We Losing Trust in Media?
The growing cynicism toward AI-generated information is not just a technological issue but also a matter of how audiences process content. Research suggests that users engage more in content analysis when they feel the topic directly affects them.
Data shows that:
| Factor | Effect on self-assessed AI-detection ability | Effect on cynicism |
|---|---|---|
| High content relevance | Increased confidence | Reduced cynicism |
| Low content relevance | Decreased confidence | Increased cynicism |
| High-risk news | Greater inclination to verify | Lower level of cynicism |
| Low-risk news | Less interest in verification | Higher level of cynicism |
The study's authors highlight the effect of cognitive fatigue. When users repeatedly encounter situations where they cannot distinguish deepfakes from real content, they stop making the effort to check. Ultimately, instead of verifying authenticity, they begin treating all information as potentially unreliable.
What Can We Do?
Experts emphasize that the solution lies not only in developing deepfake detection technologies but also in shaping new media literacy models.
- Social media platforms should implement more advanced mechanisms for labeling AI-generated content.
- Users should be trained not only in recognizing fake news but also in consciously processing information in an era of informational chaos.
- Journalists should make greater use of content verification tools and build communication strategies based on source transparency.
Liu, Wang, and Luo's study found that even a small increase in users' self-confidence regarding AI leads to a significant reduction in cynicism and greater engagement in content analysis. This means that education and tools supporting content verification can help audiences regain control over what they consider true.
The Future of Trust in Information
Deepfake news is a challenge we will face for years to come. In the age of artificial intelligence, it is not just technology that determines what we believe but also our ability to recognize and critically analyze content. If we do not begin developing skills to navigate the world of synthetic information, we may find ourselves in a reality where we cannot even trust what we see with our own eyes.
* * *
Liu, Q., Wang, L., & Luo, M. (2025). When Seeing Is Not Believing: Self-efficacy and Cynicism in the Era of Intelligent Media. Humanities and Social Sciences Communications. Available at:
https://www.nature.com/articles/s41599-025-04594-5