23.10.2023 Media market
We Trust AI-Generated Fake News More Than Human-Created News
KrzysztoF
Generating and spreading misinformation with AI can negatively affect various areas of life, including global healthcare. To examine how AI-created text impacts the comprehension of information, researchers from the University of Zurich analyzed tweets generated by GPT-3.
Researchers generated tweets using GPT-3, containing both factual and false information on topics such as vaccines, 5G technology, COVID-19, and the theory of evolution. These tweets were compared with messages written by humans. The team then surveyed approximately 700 respondents from several countries, aged between 42 and 72 and with similar educational backgrounds.
Synthetic or Organic?
The first question asked whether a given tweet appeared synthetic (AI-generated) or organic (human-written). The second asked whether the tweet's content was scientifically true or false.
- AI-generated information was more convincing to respondents. People rated AI-produced content as more truthful than human-written content. Participants also tended to trust false information from AI more than similar misinformation from humans. This makes GPT a double-edged sword, with potentially positive applications but also risks in generating persuasive false information, says Dr Federico Germani from the Institute of Biomedical Ethics and History of Medicine, University of Zurich.
The study also examined AI's potential to identify misinformation. Researchers asked GPT-3 to assess which tweets were true and which were misleading. Like the respondents, the AI struggled with accuracy.
- According to our study, distinguishing between human-written and AI-generated information is impossible. Perhaps people using this technology daily might recognize differences. However, this will become irrelevant as new models evolve, making it impossible to differentiate AI and human-generated text in the medium to long term, says Dr Federico Germani.
Artificial Misinformation
The researchers propose revising current communication models, where humans create content and AI assists in evaluation. They believe a well-designed campaign could be crafted by guiding AI to write more accessible texts, though the accuracy of AI-generated content should be verified by trained humans. This model could be particularly relevant when clear and rapid communication with large groups is necessary.
- This is one of the first studies highlighting the issue of AI in the context of misinformation. It is important to note that our study occurred in experimental conditions, not real-world scenarios. It sheds light on the societal impact of AI-generated misinformation by observing how false information circulates on social media and the reactions it provokes. Future research will focus on individual behavioral impacts and public health consequences, emphasizing a new direction in studying human interaction with AI in entirely novel environments, highlights the researcher from the University of Zurich.
Similar findings are presented in reports by Georgetown University and Stanford Internet Observatory researchers, who analyzed AI`s potential for spreading misinformation and predicted increasing algorithm-based propaganda.
Content Labeling or Media Education?
Preparing society to interpret information correctly is as vital as collaboration between AI creators, social media platforms, and regulating access to AI development tools. Labeling AI-generated content could also play a significant role.
- We face an overload of information accessible to everyone, including a significant share of false content. With AI, both false and true information will likely increase. We cannot assume control over this, says Dr Federico Germani. - During the COVID-19 pandemic, we saw that censorship can limit misinformation, but it is a short-term solution. In the medium and long term, it undermines trust in healthcare institutions. Thus, alternative strategies are necessary.
The researcher believes the only effective approach to combat misinformation is media education, enabling people to evaluate the truthfulness of information based on specific characteristics.
In Poland, the Good Practices Code was created by NASK to combat misinformation. It provides journalists, public figures, and audiences with guidelines to understand misinformation processes, identify harmful content, and prevent its spread.
Poles Divided on Trust and Acceptance of Artificial Intelligence
Artificial intelligence became a hot topic over the past year, especially after the release of ChatGPT. However, the media buzz reduced Poles' trust in AI and openness to new technologies. According to research by Digital Poland Foundation, opinions are highly polarized:
- 24% see more benefits than risks in AI,
- 27% have the opposite view,
- 25% advocate halting further development,
- 33% support its continuation.
Regarding trust, one-third of Poles are willing to share personal information with AI, while an equal proportion distrusts AI and refuses to share their data.
- Last year saw the debut of OpenAI’s ChatGPT, joined by tools like Google Bard and MidJourney. The resulting media frenzy sparked controversies, reducing enthusiasm for AI and increasing fears of job loss due to AI advances, says Piotr Mieczkowski, Managing Director of Digital Poland Foundation.
Media Buzz Surrounding Artificial Intelligence
The latest edition of the report "Technology for Society: Will Poles Become Society 5.0?" by Digital Poland Foundation, GfK Polonia, and T-Mobile Polska reveals that the media hype following ChatGPT's launch diminished trust and openness to new technologies:
- The percentage of optimists fell from 63% last year to 56%.
- The share of technology skeptics, who see technology as complex, unnecessary, or harmful, more than doubled, reaching 23%.
- 64% believe technology creates an artificial world.
- 54% fear robotics and AI threaten jobs.
Despite these concerns, more respondents remain optimistic (56%) than skeptical (23%) about new technologies.
- 55% of Poles know what AI is, while 45% do not, says Piotr Mieczkowski. - A test showed that most associate AI with robotics rather than algorithms suggesting content on streaming platforms.
Frequent Use of AI
88% of Poles are familiar with the term "artificial intelligence," and most claim to have used at least one AI-based solution, with the most common being:
- text translation (49%),
- customer service chatbots (47%),
- virtual assistants (41%).
After being shown a definition of AI based on OECD standards and the EU's proposed AI Act, only 56% claimed familiarity with the concept. A detailed knowledge test showed Poles recognize common AI applications but are unaware of uses like spam filters or weather forecasting.
Tolerance and Acceptance of Artificial Intelligence
- 85% of respondents express positive emotions toward AI, such as tolerance and acceptance, while 15% oppose or disapprove of AI's development, says Digital Poland Foundation's Managing Director.
Supervision as a Trust Factor
Poles are split on trusting AI. A third are willing to trust and share data with AI, while another third refuses. Human oversight and improved legal frameworks are key to increasing trust, as indicated by 40% of respondents.
- Privacy and human oversight of AI systems are crucial. When AI tools are presented as human-supervised, acceptance rises by nearly 50%, highlights Piotr Mieczkowski.
AI is seen as a transformative technology, already reshaping the economy, society, and job market. While some advocate halting development, others see it as essential for addressing global issues like climate change and healthcare shortages. Education is vital to fostering acceptance and understanding of AI, enabling society to harness its potential while mitigating risks.
source: Newseria