
Researchers Qiang Liu, Lin Wang, and Mengyu Luo from the University of Science and Technology in Shanghai set out to examine how deepfake technology affects the perception of information and users' trust in media. In their article "When Seeing Is Not Believing: Self-efficacy and Cynicism in the Era of Intelligent Media", published in Humanities and Social Sciences Communications, they describe two experiments involving 1,826 participants that analyzed how cynicism toward information changes with users' ability to recognize AI-generated content.
The experiments revealed that:
- Individuals with low self-assessment in recognizing AI content are more likely to question the authenticity of news that is personally significant to them.
- Paradoxically, low-risk content provokes more skepticism than content perceived as high-risk.
- Users who repeatedly fail to assess the authenticity of deepfakes lose confidence in their abilities and abandon efforts to verify information.
This phenomenon leads to the so-called "apathetic reality," where audiences choose indifference over critical thinking regarding media consumption.
Why Are We Losing Trust in Media?
The growing cynicism toward AI-generated information is not just a technological issue but also a matter of how audiences process content. Research suggests that users engage more in content analysis when they feel the topic directly affects them.
Data shows that:
| Factor | Impact on AI Self-Assessment | Impact on Cynicism |
|---|---|---|
| High content relevance | Increased confidence | Reduced cynicism |
| Low content relevance | Decreased confidence | Increased cynicism |
| High-risk news | Greater inclination to verify | Lower level of cynicism |
| Low-risk news | Less interest in verification | Higher level of cynicism |
The study's authors also highlight the effect of cognitive fatigue. When users repeatedly encounter situations in which they cannot distinguish deepfakes from real content, they stop trying to check. Ultimately, instead of verifying authenticity, they begin treating all information as potentially unreliable.
What Can We Do?
Experts emphasize that the solution lies not only in developing deepfake detection technologies but also in shaping new media literacy models.
- Social media platforms should implement more advanced mechanisms for labeling AI-generated content.
- Users should be trained not only in recognizing fake news but also in consciously processing information in an era of informational chaos.
- Journalists should make greater use of content verification tools and build communication strategies based on source transparency.
Liu, Wang, and Luo's study found that even a small increase in users' confidence in assessing AI-generated content leads to a significant reduction in cynicism and greater engagement in content analysis. This means that education and tools supporting content verification can help audiences regain control over what they consider true.
The Future of Trust in Information
Deepfake news is a challenge we will face for years to come. In the age of artificial intelligence, it is not just technology that determines what we believe but also our ability to recognize and critically analyze content. If we do not begin developing skills to navigate the world of synthetic information, we may find ourselves in a reality where we cannot even trust what we see with our own eyes.
* * *
Liu, Q., Wang, L., & Luo, M. (2025). When Seeing Is Not Believing: Self-efficacy and Cynicism in the Era of Intelligent Media. Humanities and Social Sciences Communications. Available at: https://www.nature.com/articles/s41599-025-04594-5