Illustration: DALL-E

The development of technology and the increasing importance of social media are transforming the way we consume information. The World Economic Forum's "Global Risks Report 2025" indicates that as many as 5.5 billion people use the internet, with over 5 billion active on social media. In this digital environment, algorithms play an ever greater role in selecting the content users see.
Recommendation systems increasingly reinforce social divisions: studies show that the more extreme a piece of content is, the more likely it is to be shared. The result is information bubbles that limit users' exposure to alternative viewpoints.
A particularly dangerous trend is the automation of disinformation. Generative AI enables the creation of fake images, videos, and voice recordings, making it easier to manipulate public opinion.
| Threat | Consequences | 
|---|---|
| Algorithms reinforcing divisions | Polarization of society | 
| Information bubbles | Limited access to diverse sources | 
| Automation of disinformation | Increase in manipulated content | 
Rising Threats to Information
Disinformation and information manipulation have been identified as among the top global threats in both the short and long term. Experts emphasize that the development of generative artificial intelligence (GenAI) has dramatically reduced the cost and time required to produce content.
As noted by the authors of the Global Risks Report 2025, this has resulted in a surge of manipulated materials – from fake videos and doctored voice recordings to AI-generated texts. A particularly troubling issue is that social media algorithms often favor controversial or shocking content, further exacerbating disinformation.
Media in an Era of Low Trust
Global trust in the media is at its lowest in years. In a survey covering 47 countries, only 40% of respondents stated that they trust most news sources. This is a result not only of disinformation but also of growing social and political divisions that influence how facts are perceived. In high-income countries, concerns about the effects of disinformation are more pronounced than in developing nations. In 13 countries, including Germany, India, and Canada, disinformation has been ranked among the top five threats.
Meanwhile, technology is further eroding traditional mechanisms of information verification. Artificial intelligence can generate hyper-realistic content that is difficult to distinguish from authentic materials. The result is information chaos, undermining the foundations of public debate.
| Problem | Consequences | 
|---|---|
| Spread of disinformation | Undermining trust in credible media | 
| Algorithms promoting controversy | Polarization of public opinion | 
| Generative AI in the hands of disinformers | Increase in manipulated content | 
Beyond false information, censorship is also becoming a problem – both censorship imposed by governments and content decisions made by technology corporations.
Censorship and Digital Surveillance
Governments and corporations today have greater access to user data than ever before. The digitization of administration allows authorities to amass vast amounts of information about citizens – from tax records to voter registries. The report highlights that, in many cases, private companies have better insight into user data than governments themselves.
The issues of digital surveillance and media repression are particularly pronounced in East Asia, Latin America, and the Caribbean. In these regions, censorship and citizen monitoring are climbing rapidly up the rankings of global threats. In Nicaragua, for example, digital surveillance has already become the country's fourth-largest risk.
At the same time, technology companies are exerting increasing influence over public debate. On one hand, social media platforms remove content deemed dangerous; on the other, they often arbitrarily block information, leading to accusations of bias and violations of free speech.
Fake News as a Political Weapon
Disinformation has become an element of information warfare between nations. Election interference, economic destabilization, and the manipulation of public opinion are actions carried out by governments and organizations alike.
The biggest threats associated with fake news include:
- Creating false political narratives – for example, spreading manipulated information before elections.
- Fueling social tensions – reinforcing conflicts between different social groups.
- Reputation attacks – campaigns aimed at discrediting public figures.
The World Economic Forum suggests that combating disinformation requires better labeling of AI-generated content and greater transparency regarding the algorithms used by social media platforms.
Education as the Key to Fighting Disinformation
Experts indicate that one of the most effective ways to counter disinformation is digital education. Citizens should have a better understanding of how social media works, how information manipulation occurs, and how to protect their data.
The most important areas of education include:
- Identifying fake news – learning critical thinking and source verification.
- Awareness of algorithms – understanding how social media platforms shape information.
- Privacy protection – knowledge of security settings and encryption methods.
Without improving public awareness, the digital information ecosystem may become even more chaotic, making disinformation increasingly difficult to detect.
The full Global Risks Report 2025 is available at:
https://www.marshmclennan.com/insights/publications/2025/january/global-risks-report.html