Illustration: DALL-E

The development of technology and the increasing importance of social media are transforming the way we consume information. The World Economic Forum’s "Global Risks Report 2025" indicates that as many as 5.5 billion people use the internet, with over 5 billion active on social media. In this digital environment, algorithms play an increasing role in selecting content for users.
Recommendation systems increasingly reinforce social divisions. Studies show that the more extreme a piece of content is, the more likely it is to be shared. This creates information bubbles and limits exposure to alternative viewpoints.
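The dynamic is easy to see in a toy model. The sketch below is a minimal illustration, not code from the report or from any platform: it assumes, purely for demonstration, that predicted engagement rises with how extreme a post is, and then ranks a feed by that prediction. The names (`posts`, `predicted_engagement`) and the numbers are hypothetical.

```python
# Minimal sketch: a feed ranked purely by predicted engagement.
# The link between extremity and engagement is an assumed, illustrative
# relationship, not data from the Global Risks Report.
import random

random.seed(42)

# Each post gets an "extremity" score in [0, 1].
posts = [{"id": i, "extremity": random.random()} for i in range(1000)]

def predicted_engagement(post):
    baseline = 0.1
    extremity_bonus = 0.6 * post["extremity"]  # assumption for illustration
    return baseline + extremity_bonus

# An engagement-maximizing recommender simply sorts by that prediction.
feed = sorted(posts, key=predicted_engagement, reverse=True)

top = feed[:20]
avg_top = sum(p["extremity"] for p in top) / len(top)
avg_all = sum(p["extremity"] for p in posts) / len(posts)
print(f"average extremity, whole pool: {avg_all:.2f}")
print(f"average extremity, top of feed: {avg_top:.2f}")
# The top of the feed skews far more extreme than the pool it was drawn
# from - the mechanism behind information bubbles described above.
```

Run on its own, the script shows the top of the feed averaging far higher extremity than the overall pool, which is the core of the filter-bubble effect the report describes.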
A particularly dangerous trend is the automation of disinformation. Generative AI enables the creation of fake images, videos, and voice recordings, making it easier to manipulate public opinion.
| Threat | Consequences |
|---|---|
| Algorithms reinforcing divisions | Polarization of society |
| Information bubbles | Limited access to diverse sources |
| Automation of disinformation | Increase in manipulated content |
Rising Threats to Information
Disinformation and information manipulation have been identified as the top global threats in both the short and long term. Experts emphasize that the development of generative artificial intelligence (GenAI) has dramatically reduced the cost and time required to produce content.
As noted by the authors of the Global Risks Report 2025, this has resulted in a surge of manipulated materials – from fake videos and doctored voice recordings to AI-generated texts. A particularly troubling issue is that social media algorithms often favor controversial or shocking content, further exacerbating disinformation.
Media in an Era of Low Trust
Global trust in the media is at its lowest in years. In a survey covering 47 countries, only 40% of respondents stated that they trust most news sources. This is a result not only of disinformation but also of growing social and political divisions that influence how facts are perceived. In high-income countries, concerns about the effects of disinformation are more pronounced than in developing nations. In 13 countries, including Germany, India, and Canada, disinformation has been ranked among the top five threats.
Meanwhile, technology is further eroding traditional mechanisms of information verification. Artificial intelligence can generate hyper-realistic content that is difficult to distinguish from authentic materials. The result is information chaos, undermining the foundations of public debate.
| Problem | Consequences |
|---|---|
| Spread of disinformation | Undermining trust in credible media |
| Algorithms promoting controversy | Polarization of public opinion |
| Generative AI in the hands of disinformers | Increase in manipulated content |
Beyond false information, censorship is also becoming a problem – both censorship imposed by governments and content decisions made by technology corporations.
Censorship and Digital Surveillance
Governments and corporations today have greater access to user data than ever before. The digitization of administration allows authorities to amass vast amounts of information about citizens – from tax records to voter registries. The report highlights that, in many cases, private companies have better insight into user data than governments themselves.
The issues of digital surveillance and media repression are particularly pronounced in East Asia, Latin America, and the Caribbean. In these regions, censorship and citizen monitoring are rising rapidly in the rankings of global threats. In Nicaragua, for example, digital surveillance has already become the country’s fourth-largest risk.
At the same time, technology companies are exerting increasing influence over public debate. On one hand, social media platforms remove content deemed dangerous; on the other, they often arbitrarily block information, leading to accusations of bias and violations of free speech.
Fake News as a Political Weapon
Disinformation has become an element of information warfare between nations. Governments and organizations can use it for election interference, economic destabilization, and the manipulation of public opinion.
The biggest threats associated with fake news include:
- Creating false political narratives – for example, spreading manipulated information before elections.
- Fueling social tensions – reinforcing conflicts between different social groups.
- Reputation attacks – campaigns aimed at discrediting public figures.
The World Economic Forum suggests that combating disinformation requires better labeling of AI-generated content and greater transparency regarding the algorithms used by social media platforms.
Education as the Key to Fighting Disinformation
Experts indicate that one of the most effective ways to counter disinformation is digital education. Citizens should have a better understanding of how social media works, how information manipulation occurs, and how to protect their data.
The most important areas of education include:
- Identifying fake news – learning critical thinking and source verification.
- Awareness of algorithms – understanding how social media platforms shape information.
- Privacy protection – knowledge of security settings and encryption methods.
Without improving public awareness, the digital information ecosystem may become even more chaotic, making disinformation increasingly difficult to detect.
The full Global Risks Report 2025 is available at:
https://www.marshmclennan.com/insights/publications/2025/january/global-risks-report.html