Illustration: DALL-E

Researchers Qiang Liu, Lin Wang, and Mengyu Luo from the University of Science and Technology in Shanghai set out to examine how deepfake technology affects the perception of information and users' trust in media. In their article When Seeing Is Not Believing: Self-efficacy and Cynicism in the Era of Intelligent Media, published in Humanities and Social Sciences Communications, they describe two experiments involving 1,826 participants, analyzing how cynicism toward information changes depending on users' ability to recognize AI-generated content.
The experiments revealed that:
- Individuals with low self-assessment in recognizing AI content are more likely to question the authenticity of news that is personally significant to them.
- Low-risk content paradoxically raises more skepticism than content considered high-risk.
- Users who repeatedly struggle to assess the authenticity of deepfakes lose confidence in their abilities and abandon efforts to verify information.
This phenomenon leads to the so-called "apathetic reality," in which audiences choose indifference over critical engagement with the media they consume.
Why Are We Losing Trust in Media?
The growing cynicism toward AI-generated information is not just a technological issue but also a matter of how audiences process content. Research suggests that users engage more in content analysis when they feel the topic directly affects them.
Data shows that:
| Factor | Impact on Self-Assessed Ability to Recognize AI Content | Impact on Cynicism |
|---|---|---|
| High content relevance | Increased confidence | Reduced cynicism |
| Low content relevance | Decreased confidence | Increased cynicism |
| High-risk news | Greater inclination to verify | Lower level of cynicism |
| Low-risk news | Less interest in verification | Higher level of cynicism |
The study's authors highlight the effect of cognitive fatigue. When users repeatedly encounter situations in which they cannot distinguish deepfakes from real content, they stop making the effort to check. Ultimately, instead of verifying authenticity, they begin treating all information as potentially unreliable.
What Can We Do?
Experts emphasize that the solution lies not only in developing deepfake detection technologies but also in shaping new media literacy models.
- Social media platforms should implement more advanced mechanisms for labeling AI-generated content.
- Users should be trained not only in recognizing fake news but also in consciously processing information in an era of informational chaos.
- Journalists should make greater use of content verification tools and build communication strategies based on source transparency.
Liu, Wang, and Luo's study found that even a small increase in users' self-confidence regarding AI leads to a significant reduction in cynicism and greater engagement in content analysis. This means that education and tools supporting content verification can help audiences regain control over what they consider true.
The Future of Trust in Information
Deepfake news is a challenge we will face for years to come. In the age of artificial intelligence, it is not just technology that determines what we believe but also our ability to recognize and critically analyze content. If we do not begin developing skills to navigate the world of synthetic information, we may find ourselves in a reality where we cannot even trust what we see with our own eyes.
* * *
Article: Liu, Q., Wang, L., & Luo, M. (2025). When Seeing Is Not Believing: Self-efficacy and Cynicism in the Era of Intelligent Media. Humanities and Social Sciences Communications. Available at:
https://www.nature.com/articles/s41599-025-04594-5