
In European newsrooms that not long ago were skeptical about automation, artificial intelligence is now firmly established. But as shown in the report "Leading Newsrooms in the Age of Generative AI", published by the European Broadcasting Union (EBU), not everything intelligent brings true value to newsrooms.
Newsrooms use AI but still do not fully trust it
Everyday journalistic tasks are increasingly supported by generative AI. Newsrooms use it for translations, transcriptions, video subtitles, and content personalization, and the speed and quality of these tasks have improved significantly. The BBC uses AI to create local football match reports based on radio commentary. Sweden's SR offers chatbots that answer user questions using only its own verified materials.
While the technology performs well in back-end tasks, concerns remain when it comes to direct audience contact. "We are better with language subtleties than before, but we still do not trust AI when it comes to political news or investigative content," says one manager quoted in the report.
Areas where newsrooms most often use AI:
- translations and transcriptions (e.g., Finnish Yle restored its Russian-language service)
- automated video subtitles (Radio France works with deaf associations)
- local content personalization (Bayerischer Rundfunk lets users tailor news to their region)
- comment moderation and "discussion summaries" (BR tool)
Newsrooms stress that the "human factor" remains essential. Without journalists, generated content often loses context or contains errors.
No strategy, no results: AI is not a cheap shortcut
AI experiments require significant time, staff, and financial resources. While large newsrooms can afford in-house AI labs and negotiate with tech providers, smaller ones must rely on off-the-shelf solutions. Meanwhile, even the largest institutions have not seen savings yet. As Anne Lagercrantz, Director General of Sweden's SVT, puts it: "We have improved efficiency, but not reduced costs. For now, everything is more expensive."
Area | Measured regularly?
--- | ---
Time saved by AI | No
Journalistic quality | Rarely
Impact on audience engagement | Occasionally
Implementation and maintenance cost | No regular indicators
Source: EBU News Report 2025
Most newsrooms do not conduct a full cost-benefit analysis of AI deployment, and unified success metrics are lacking. Investment decisions based only on enthusiasm or technological pressure often lead to disappointment.
Experts advise holding off on costly implementations if the technology does not offer a clear advantage. Edmundo Ortega, an AI strategy expert, emphasizes: "If you cannot point to real value a feature brings to your organization, wait. Something better and cheaper is just around the corner."
The audience does not want to know it is AI; it wants better journalism
Another challenge is how users perceive AI. Opinions are divided. While audiences accept AI in technical tasks like subtitles or translations, they do not want it replacing journalists in political coverage or local news. Many respondents also expressed fear that automation will lead to layoffs and weaken the media's watchdog role.
Sample user reactions to content labeled as "generated by AI":
- "If AI did this, why do we need reporters?"
- "I don't care what you use. I want reliable information"
- "Journalism is not just information. It is also empathy and responsibility"
Labeling content as AI-assisted often creates distrust, and sometimes anger. Yle stopped using such labels after negative reader reactions. Many newsrooms now choose a selective approach, informing about AI use only when it may mislead the audience, such as in the case of generated images or cloned voices.
That does not mean AI is losing relevance. Quite the opposite. As Minna Mustakallio of Yle emphasizes: "People are not interested in AI. They care whether they are getting better journalism. And that is where we should focus."
* * *
The report "Leading Newsrooms in the Age of Generative AI" is based on a series of in-depth interviews with newsroom leaders across Europe. It was prepared by Alexandra Borchardt, a media innovation expert affiliated with the Reuters Institute in Oxford, in collaboration with Olle Zachrison, Director of AI at Sveriges Radio, and Kati Bremme, Head of Innovation at France Télévisions. The authors were supported by Belén López and Yolène Johanny. The full material is available on the European Broadcasting Union website.