illustration: bing.com/create

Researchers generated tweets using GPT-3 containing both factual and false information on topics such as vaccines, 5G technology, COVID-19, and the theory of evolution. These tweets were compared with messages written by humans. The researchers then created a survey and asked approximately 700 respondents from various countries, aged between 42 and 72 and with similar educational backgrounds, to participate.
Synthetic or Organic?
The first question asked whether a given tweet appeared synthetic (AI-generated) or organic (human-created). The second evaluated whether the tweet's content was scientifically true or false.
- AI-generated information was more convincing to respondents. People rated AI-produced content as more truthful compared to human-written content. Additionally, participants tended to trust false information from AI more than similar misinformation from humans. This makes GPT a double-edged sword, with potentially positive applications but also risks in generating persuasive false information, says Dr Federico Germani from the Institute of Biomedical Ethics and History of Medicine, University of Zurich.
The study also examined AI's potential to identify misinformation. Researchers asked GPT-3 to assess which tweets were true and which were misleading. Like the human respondents, the AI struggled with accuracy.
- According to our study, distinguishing between human-written and AI-generated information is impossible. Perhaps people using this technology daily might recognize differences. However, this will become irrelevant as new models evolve, making it impossible to differentiate AI and human-generated text in the medium to long term, says Dr Federico Germani.
Artificial Misinformation
The researchers propose reversing the current communication model, in which humans create content and AI assists in evaluation. They believe a well-designed campaign could be crafted by guiding AI to write more accessible texts, with the accuracy of the AI-generated content verified by trained humans. This model could be particularly relevant when clear and rapid communication with large groups is necessary.
- This is one of the first studies highlighting the issue of AI in the context of misinformation. It is important to note that our study was conducted under experimental conditions, not in real-world scenarios. It sheds light on the societal impact of AI-generated misinformation by observing how false information circulates on social media and the reactions it provokes. Future research will focus on individual behavioral impacts and public health consequences, emphasizing a new direction in studying human interaction with AI in entirely novel environments, highlights the researcher from the University of Zurich.
Similar findings are presented in reports by Georgetown University and Stanford Internet Observatory researchers, who analyzed AI's potential for spreading misinformation and predicted increasing algorithm-based propaganda.
Content Labeling or Media Education?
Preparing society to interpret information correctly is as vital as collaboration between AI creators, social media platforms, and regulating access to AI development tools. Labeling AI-generated content could also play a significant role.
- We face an overload of information accessible to everyone, including a significant share of false content. With AI, both false and true information will likely increase. We cannot assume control over this, says Dr Federico Germani. - During the COVID-19 pandemic, we saw that censorship can limit misinformation, but it is a short-term solution. In the medium and long term, it undermines trust in healthcare institutions. Thus, alternative strategies are necessary.
The researcher believes the only effective approach to combat misinformation is media education, enabling people to evaluate the truthfulness of information based on specific characteristics.
In Poland, the Good Practices Code was created by NASK to combat misinformation. It provides journalists, public figures, and audiences with guidelines to understand misinformation processes, identify harmful content, and prevent its spread.
Poles Divided on Trust and Acceptance of Artificial Intelligence
Artificial intelligence became a hot topic over the past year, especially after the release of ChatGPT. However, the media buzz reduced Poles' trust in AI and openness to new technologies. According to research by Digital Poland Foundation, opinions are highly polarized:
- 24% see more benefits than risks in AI,
- 27% have the opposite view,
- 25% advocate halting further development,
- 33% support its continuation.
Regarding trust, one-third of Poles are willing to share personal information with AI, while an equal proportion distrusts AI and refuses to share their data.
- Last year saw the debut of OpenAI’s ChatGPT, joined by tools like Google Bard and MidJourney. The resulting media frenzy sparked controversies, reducing enthusiasm for AI and increasing fears of job loss due to AI advances, says Piotr Mieczkowski, Managing Director of Digital Poland Foundation.
Media Buzz Surrounding Artificial Intelligence
The latest edition of the report "Technology for Society: Will Poles Become Society 5.0?" by Digital Poland Foundation, GfK Polonia, and T-Mobile Polska reveals that the media hype following ChatGPT's launch diminished trust and openness to new technologies:
- The percentage of optimists fell from 63% last year to 56%.
- The share of technology skeptics, who see it as complex, unnecessary, or harmful, more than doubled, to 23%.
- 64% believe technology creates an artificial world.
- 54% fear robotics and AI threaten jobs.
Despite these concerns, more respondents remain optimistic (56%) than skeptical (23%) about new technologies.
- 55% of Poles know what AI is, while 45% do not, says Piotr Mieczkowski. - A test showed that most associate AI with robotics rather than algorithms suggesting content on streaming platforms.
Frequent Use of AI
88% of Poles are familiar with the term "artificial intelligence," and most claim to have used at least one AI-based solution, with the most common being:
- text translation (49%),
- customer service chatbots (47%),
- virtual assistants (41%).
After defining AI based on OECD standards and the EU's AI Act proposal, only 56% claimed familiarity with the concept. A detailed knowledge test showed Poles recognize common AI applications but are unaware of uses like spam filters or weather forecasting.
Tolerance and Acceptance of Artificial Intelligence
- 85% of respondents express positive emotions toward AI, such as tolerance and acceptance, while 15% oppose or disapprove of AI's development, says Digital Poland Foundation's Managing Director.
Supervision as a Trust Factor
Poles are split on trusting AI. A third are willing to trust AI and share their data with it, while another third refuses. According to 40% of respondents, human oversight and improved legal frameworks are key to increasing trust.
- Privacy and human oversight of AI systems are crucial. When AI tools are presented as human-supervised, acceptance rises by nearly 50%, highlights Piotr Mieczkowski.
AI is seen as a transformative technology, already reshaping the economy, society, and job market. While some advocate halting development, others see it as essential for addressing global issues like climate change and healthcare shortages. Education is vital to fostering acceptance and understanding of AI, enabling society to harness its potential while mitigating risks.
source: Newseria