Researchers Qiang Liu, Lin Wang, and Mengyu Luo from the University of Science and Technology in Shanghai set out to examine how deepfake technology affects the perception of information and users' trust in media. In their article "When Seeing Is Not Believing: Self-efficacy and Cynicism in the Era of Intelligent Media," published in Humanities and Social Sciences Communications, they describe two experiments involving 1,826 participants, analyzing how cynicism toward information changes depending on users' ability to recognize AI-generated content.
The experiments revealed that:
- Individuals with low self-assessment in recognizing AI content are more likely to question the authenticity of news that is personally significant to them.
- Low-risk content paradoxically provokes more skepticism than content considered high-risk.
- Users who repeatedly struggle to assess the authenticity of deepfakes lose confidence in their abilities and abandon efforts to verify information.
This phenomenon leads to a so-called "apathetic reality," in which audiences choose indifference over critical engagement with the media they consume.
Why Are We Losing Trust in Media?
The growing cynicism toward AI-generated information is not just a technological issue but also a matter of how audiences process content. Research suggests that users engage more in content analysis when they feel the topic directly affects them.
Data shows that:
| Factor | Effect on self-assessed AI-detection ability | Effect on cynicism |
|---|---|---|
| High content relevance | Increased confidence | Reduced cynicism |
| Low content relevance | Decreased confidence | Increased cynicism |
| High-risk news | Greater inclination to verify | Lower level of cynicism |
| Low-risk news | Less interest in verification | Higher level of cynicism |
The study's authors highlight the effect of cognitive fatigue. When users repeatedly encounter situations in which they cannot distinguish deepfakes from real content, they stop making the effort to check. Ultimately, instead of verifying authenticity, they begin treating all information as potentially unreliable.
What Can We Do?
Experts emphasize that the solution lies not only in developing deepfake detection technologies but also in shaping new media literacy models.
- Social media platforms should implement more advanced mechanisms for labeling AI-generated content.
- Users should be trained not only in recognizing fake news but also in consciously processing information in an era of informational chaos.
- Journalists should make greater use of content verification tools and build communication strategies based on source transparency.
Liu, Wang, and Luo's study found that even a small increase in users' self-confidence regarding AI leads to a significant reduction in cynicism and greater engagement in content analysis. This means that education and tools supporting content verification can help audiences regain control over what they consider true.
The Future of Trust in Information
Deepfake news is a challenge we will face for years to come. In the age of artificial intelligence, it is not just technology that determines what we believe but also our ability to recognize and critically analyze content. If we do not begin developing skills to navigate the world of synthetic information, we may find ourselves in a reality where we cannot even trust what we see with our own eyes.
* * *
Liu, Q., Wang, L., & Luo, M. (2025). When Seeing Is Not Believing: Self-efficacy and Cynicism in the Era of Intelligent Media. Humanities and Social Sciences Communications. Available at:
https://www.nature.com/articles/s41599-025-04594-5