23.10.2023 Media industry
We Trust AI-Generated Fake News More Than Human-Created News
KrzysztoF
Generating and spreading misinformation with AI can negatively affect various areas of life, including global healthcare. To examine how AI-created text impacts the comprehension of information, researchers from the University of Zurich analyzed tweets generated by GPT-3.
Researchers generated tweets using GPT-3, containing both factual and false information on topics such as vaccines, 5G technology, COVID-19, and the theory of evolution. These tweets were compared with messages written by humans. The researchers then surveyed approximately 700 respondents from various countries, aged between 42 and 72 and with similar educational backgrounds.
Synthetic or Organic?
The first question asked whether a given tweet appeared synthetic (AI-generated) or organic (human-created). The second evaluated whether the tweet's content was scientifically true or false.
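The two questions amount to a pair of binary classification tasks for each respondent. The sketch below is purely illustrative and is not the researchers' actual analysis pipeline; the function, field names, and sample responses are all assumptions made for demonstration:

```python
# Illustrative tally of survey answers for the two binary judgments:
# (1) was the tweet judged synthetic or organic,
# (2) was its content judged true or false.
# The sample data is invented for demonstration only.

def accuracy(responses, key_truth, key_guess):
    """Fraction of responses where the guess matches the ground truth."""
    correct = sum(1 for r in responses if r[key_guess] == r[key_truth])
    return correct / len(responses)

responses = [
    {"source": "ai",    "guessed_source": "human",
     "veracity": "false", "guessed_veracity": "true"},
    {"source": "human", "guessed_source": "human",
     "veracity": "true",  "guessed_veracity": "true"},
    {"source": "ai",    "guessed_source": "ai",
     "veracity": "true",  "guessed_veracity": "true"},
    {"source": "human", "guessed_source": "ai",
     "veracity": "false", "guessed_veracity": "false"},
]

# How often respondents spotted the true origin, and how often they
# judged veracity correctly.
origin_acc = accuracy(responses, "source", "guessed_source")
veracity_acc = accuracy(responses, "veracity", "guessed_veracity")
```

A real analysis would aggregate thousands of such judgments per condition (AI-true, AI-false, human-true, human-false) and compare accuracy across them, which is how the study's headline finding was reached.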
- AI-generated information was more convincing to respondents. People rated AI-produced content as more truthful compared to human-written content. Additionally, participants tended to trust false information from AI more than similar misinformation from humans. This makes GPT a double-edged sword, with potentially positive applications but also risks in generating persuasive false information, says Dr Federico Germani from the Institute of Biomedical Ethics and History of Medicine, University of Zurich.
The study also examined AI's potential to identify misinformation. Researchers asked GPT-3 to assess which tweets were true and which were misleading. Like the respondents, the AI struggled with accuracy.
- According to our study, distinguishing between human-written and AI-generated information is impossible. Perhaps people using this technology daily might recognize differences. However, this will become irrelevant as new models evolve, making it impossible to differentiate AI and human-generated text in the medium to long term, says Dr Federico Germani.
Artificial Misinformation
The researchers propose inverting the current communication model, in which humans create content and AI assists in evaluation. In their view, a well-designed campaign could instead guide AI to write more accessible texts, with trained humans verifying the accuracy of the AI-generated content. This model could be particularly relevant when clear and rapid communication with large groups is necessary.
- This is one of the first studies highlighting the issue of AI in the context of misinformation. It is important to note that our study occurred in experimental conditions, not real-world scenarios. It sheds light on the societal impact of AI-generated misinformation by observing how false information circulates on social media and the reactions it provokes. Future research will focus on individual behavioral impacts and public health consequences, emphasizing a new direction in studying human interaction with AI in entirely novel environments, highlights the researcher from the University of Zurich.
Similar findings are presented in reports by Georgetown University and Stanford Internet Observatory researchers, who analyzed AI`s potential for spreading misinformation and predicted increasing algorithm-based propaganda.
Content Labeling or Media Education?
Preparing society to interpret information correctly is as vital as collaboration between AI creators and social media platforms, or as regulating access to AI development tools. Labeling AI-generated content could also play a significant role.
- We face an overload of information accessible to everyone, including a significant share of false content. With AI, both false and true information will likely increase. We cannot assume control over this, says Dr Federico Germani. - During the COVID-19 pandemic, we saw that censorship can limit misinformation, but it is a short-term solution. In the medium and long term, it undermines trust in healthcare institutions. Thus, alternative strategies are necessary.
The researcher believes the only effective approach to combat misinformation is media education, enabling people to evaluate the truthfulness of information based on specific characteristics.
In Poland, the Good Practices Code was created by NASK to combat misinformation. It provides journalists, public figures, and audiences with guidelines to understand misinformation processes, identify harmful content, and prevent its spread.
Poles Divided on Trust and Acceptance of Artificial Intelligence
Artificial intelligence became a hot topic over the past year, especially after the release of ChatGPT. However, the media buzz reduced Poles' trust in AI and openness to new technologies. According to research by Digital Poland Foundation, opinions are highly polarized:
- 24% see more benefits than risks in AI,
- 27% have the opposite view,
- 25% advocate halting further development,
- 33% support its continuation.
Regarding trust, one-third of Poles are willing to share personal information with AI, while an equal proportion distrusts AI and refuses to share their data.
- Last year saw the debut of OpenAI’s ChatGPT, joined by tools like Google Bard and MidJourney. The resulting media frenzy sparked controversies, reducing enthusiasm for AI and increasing fears of job loss due to AI advances, says Piotr Mieczkowski, Managing Director of Digital Poland Foundation.
Media Buzz Surrounding Artificial Intelligence
The latest edition of the report "Technology for Society: Will Poles Become Society 5.0?" by Digital Poland Foundation, GfK Polonia, and T-Mobile Polska reveals the media hype following ChatGPT's launch diminished trust and openness to new technologies:
- The percentage of optimists fell from 63% last year to 56%.
- The share of technology skeptics more than doubled, to 23%; they see technology as complex, unnecessary, or harmful.
- 64% believe technology creates an artificial world.
- 54% fear robotics and AI threaten jobs.
Despite these concerns, more respondents remain optimistic (56%) than skeptical (23%) about new technologies.
- 55% of Poles know what AI is, while 45% do not, says Piotr Mieczkowski. - A test showed that most associate AI with robotics rather than with the algorithms that recommend content on streaming platforms.

Frequent Use of AI
88% of Poles are familiar with the term "artificial intelligence," and most claim to have used at least one AI-based solution, with the most common being:
- text translation (49%),
- customer service chatbots (47%),
- virtual assistants (41%).
After defining AI based on OECD standards and the EU's AI Act proposal, only 56% claimed familiarity with the concept. A detailed knowledge test showed Poles recognize common AI applications but are unaware of uses like spam filters or weather forecasting.
Tolerance and Acceptance of Artificial Intelligence
- 85% of respondents express positive emotions toward AI, such as tolerance and acceptance, while 15% oppose or disapprove of AI's development, says Digital Poland Foundation's Managing Director.
Supervision as a Trust Factor
Poles are split on trusting AI. A third are willing to trust and share data with AI, while another third refuses. Human oversight and improved legal frameworks are key to increasing trust, as indicated by 40% of respondents.
- Privacy and human oversight of AI systems are crucial. When AI tools are presented as human-supervised, acceptance rises by nearly 50%, highlights Piotr Mieczkowski.
AI is seen as a transformative technology, already reshaping the economy, society, and job market. While some advocate halting development, others see it as essential for addressing global issues like climate change and healthcare shortages. Education is vital to fostering acceptance and understanding of AI, enabling society to harness its potential while mitigating risks.
source: Newseria