
Researchers used GPT-3 to generate tweets containing both accurate and false information on topics such as vaccines, 5G technology, COVID-19, and the theory of evolution, and compared them with tweets written by humans. They then surveyed approximately 700 respondents from various countries, aged between 42 and 72 and with similar educational backgrounds.
Synthetic or Organic?
The first question asked whether a given tweet appeared synthetic (AI-generated) or organic (human-created). The second evaluated whether the tweet's content was scientifically true or false.
- AI-generated information was more convincing to respondents. People rated AI-produced content as more truthful than human-written content. Participants also tended to trust false information from AI more than similar misinformation written by humans. This makes GPT a double-edged sword, with potentially positive applications but also risks in generating persuasive false information, says Dr Federico Germani from the Institute of Biomedical Ethics and History of Medicine, University of Zurich.
The study also examined AI's potential to identify misinformation. Researchers asked GPT-3 to assess which tweets were true and which were misleading. Like the respondents, the AI struggled with accuracy.
- According to our study, distinguishing between human-written and AI-generated information is impossible. Perhaps people using this technology daily might recognize differences. However, this will become irrelevant as new models evolve, making it impossible to differentiate AI and human-generated text in the medium to long term, says Dr Federico Germani.
Artificial Misinformation
The researchers propose revising current communication models so that AI generates content while humans assist in evaluating it. They believe a well-designed campaign could be crafted by guiding AI to write more accessible texts, provided the accuracy of the AI-generated content is verified by trained humans. This model could be particularly relevant when clear and rapid communication with large groups is necessary.
- This is one of the first studies highlighting the issue of AI in the context of misinformation. It is important to note that our study occurred in experimental conditions, not real-world scenarios. It sheds light on the societal impact of AI-generated misinformation; future research will observe how false information circulates on social media and the reactions it provokes, focusing on individual behavioral impacts and public health consequences. This marks a new direction in studying human interaction with AI in entirely novel environments, highlights the researcher from the University of Zurich.
Similar findings are presented in reports by Georgetown University and Stanford Internet Observatory researchers, who analyzed AI's potential for spreading misinformation and predicted increasing algorithm-based propaganda.
Content Labeling or Media Education?
Preparing society to interpret information correctly is as vital as collaboration between AI creators and social media platforms, and as regulating access to AI development tools. Labeling AI-generated content could also play a significant role.
- We face an overload of information accessible to everyone, including a significant share of false content. With AI, both false and true information will likely increase. We cannot assume control over this, says Dr Federico Germani. - During the COVID-19 pandemic, we saw that censorship can limit misinformation, but it is a short-term solution. In the medium and long term, it undermines trust in healthcare institutions. Thus, alternative strategies are necessary.
The researcher believes the only effective approach to combat misinformation is media education, enabling people to evaluate the truthfulness of information based on specific characteristics.
In Poland, the Good Practices Code was created by NASK to combat misinformation. It provides journalists, public figures, and audiences with guidelines to understand misinformation processes, identify harmful content, and prevent its spread.
Poles Divided on Trust and Acceptance of Artificial Intelligence
Artificial intelligence became a hot topic over the past year, especially after the release of ChatGPT. However, the media buzz reduced Poles' trust in AI and openness to new technologies. According to research by Digital Poland Foundation, opinions are highly polarized:
- 24% see more benefits than risks in AI,
- 27% have the opposite view,
- 25% advocate halting further development,
- 33% support its continuation.
Regarding trust, one-third of Poles are willing to share personal information with AI, while an equal proportion distrusts AI and refuses to share their data.
- Last year saw the debut of OpenAI’s ChatGPT, joined by tools like Google Bard and MidJourney. The resulting media frenzy sparked controversies, reducing enthusiasm for AI and increasing fears of job loss due to AI advances, says Piotr Mieczkowski, Managing Director of Digital Poland Foundation.
Media Buzz Surrounding Artificial Intelligence
The latest edition of the report "Technology for Society: Will Poles Become Society 5.0?" by Digital Poland Foundation, GfK Polonia, and T-Mobile Polska reveals that the media hype following ChatGPT's launch diminished trust and openness to new technologies:
- The percentage of optimists fell from 63% last year to 56%.
- The share of technology skeptics, who see it as complex, unnecessary, or harmful, more than doubled, reaching 23%.
- 64% believe technology creates an artificial world.
- 54% fear robotics and AI threaten jobs.
Despite these concerns, more respondents remain optimistic (56%) than skeptical (23%) about new technologies.
- 55% of Poles know what AI is, while 45% do not, says Piotr Mieczkowski. - A test showed that most associate AI with robotics rather than algorithms suggesting content on streaming platforms.

Frequent Use of AI
88% of Poles are familiar with the term "artificial intelligence," and most claim to have used at least one AI-based solution, with the most common being:
- text translation (49%),
- customer service chatbots (47%),
- virtual assistants (41%).
After defining AI based on OECD standards and the EU's AI Act proposal, only 56% claimed familiarity with the concept. A detailed knowledge test showed Poles recognize common AI applications but are unaware of uses like spam filters or weather forecasting.
Tolerance and Acceptance of Artificial Intelligence
- 85% of respondents express positive emotions toward AI, such as tolerance and acceptance, while 15% oppose or disapprove of AI's development, says Digital Poland Foundation's Managing Director.
Supervision as a Trust Factor
Poles are split on trusting AI. A third are willing to trust and share data with AI, while another third refuses. Human oversight and improved legal frameworks are key to increasing trust, as indicated by 40% of respondents.
- Privacy and human oversight of AI systems are crucial. When AI tools are presented as human-supervised, acceptance rises by nearly 50%, highlights Piotr Mieczkowski.
AI is seen as a transformative technology, already reshaping the economy, society, and job market. While some advocate halting development, others see it as essential for addressing global issues like climate change and healthcare shortages. Education is vital to fostering acceptance and understanding of AI, enabling society to harness its potential while mitigating risks.
source: Newseria