6.11.2023 | Media industry
Milgram Experiment 2023. AI Can Encourage Violence
KrzysztoF

In the study, published by SWPS University researchers in the journal "Computers in Human Behavior: Artificial Humans," 40 participants were divided into two groups. In the first group, commands were issued by a robot; in the second, by a human. In both groups, 90% of participants followed all instructions, pressing ten consecutive buttons on an electric impulse generator.
The results show that people are inclined to follow orders from an authority even when those orders conflict with their morals. In this case, the authority was a robot that lacks human traits such as empathy or a sense of justice. Yet participants were willing to obey its commands, even when doing so meant causing pain to another person.
The Dangerous Authority of Robots
- In both groups, participants withdrew only at the later stages of the experiment: in the control group with a human, at buttons 7 and 9; in the experimental group, twice at button 8. In each group, two people opted out of the experiment - commented Dr. Konrad Maj, who supervised the experiment, as quoted on the SWPS University website. - To our knowledge, this is the first study showing that people are willing to harm another person when a robot instructs them to do so. Moreover, our experiment also showed that when the robot escalates its demands, instructing a person to inflict increasing pain on another, people are still inclined to comply.
The study has significant implications for future safety, as robots become increasingly technologically advanced and play a larger role in our lives. The results suggest that people may be willing to trust robots unconditionally, even if those robots make wrong decisions or issue harmful commands.
Key Findings:
- People are inclined to follow orders from an authority, even if those orders conflict with their morals.
- An authority can even be a robot, which does not possess human traits.
- In the future, as robots become more technologically advanced, people may be inclined to trust them unconditionally, even if they make incorrect decisions or issue harmful commands.
- Robots could be used to manipulate people and prompt them to take actions that are harmful to themselves.
- Robots could be used to incite violence or harm others.
- People may become overly reliant on robots and stop thinking independently.
- How can this be prevented? It seems there are two paths - summarizes Dr. Konrad Maj, as quoted on the SWPS University website. - First, robots can be programmed to warn people that they may sometimes be wrong and make incorrect decisions. Second, we need to emphasize education from an early age: although robots are generally trustworthy, they should not be trusted unconditionally. It is worth noting, however, that disobeying machines seems pointless, as they already help us, for example, in stores and at airports. In non-humanoid forms, they are already among us.
***
More about the repeated Milgram experiment and similar studies in business, healthcare, and sports will be presented on December 9 and 10, 2023, at the international HumanTech Summit at SWPS University. The event is organized by SWPS University’s HumanTech Center. Online access is free: https://www.htsummit.pl/
New articles in section Media industry
Trust in social media. YouTube beats TikTok and X
Krzysztof Fiedorek
Do we really trust social media? A new study reveals major differences in how top platforms are rated. Trust goes where there's authenticity, not just algorithms. The role of people is growing while brand influence is fading.
Artificial intelligence in newsrooms. Three realities of the AI era in media
Krzysztof Fiedorek
According to a report by the European Broadcasting Union, many newsrooms already use AI but still do not fully trust it. Audiences do not want "robotic" news, and the technologies themselves, though fast, can be costly, unreliable, and surprisingly human in their mistakes.
Zero-click search 2025. The even bigger end of clicking in search engines
Bartłomiej Dwornik
Google is giving up its role as a signpost to the web; increasingly, it wants to be the destination of the whole journey. ChatGPT and Perplexity are hot on its heels, changing the rules of the search game, and AI Overviews is cut from the same cloth. Only content creators are losing ground in this race.
See articles on a similar topic:
Selfish Trap: A New Social Influence Technique
Krzysztof Fiedorek
Three psychologists from SWPS University have described a social influence method suggesting people are more willing to complete a task if it highlights a quality important to them, such as loyalty, intelligence, or rationality.
Radio, Streaming, and Podcasts. Total Audio 2024 Report about Poland
Krzysztof Fiedorek
Audio content is a daily companion for Poles. According to the Total Audio 2024 study conducted by Adres:Media on behalf of the Radio Research Committee, as many as 90% of respondents listen to audio content at least once a week, and 80% do so daily. The average listening time is nearly five hours per day.
Disinformation and Fake News. Experts Discuss Challenges for Journalists
RINF
The pandemic, followed by the war in Ukraine, triggered a massive wave of disinformation in media and social channels. Experts at the Impact’22 Congress in Poznań and the European Economic Congress in Katowice discussed effective strategies to combat disinformation.
Virtual Influencers Perceived as More Authentic than Real Ones
Agnieszka Kliks-Pudlik
Virtual influencers are fictional, generated characters that imitate the appearance and behaviour of real people, and they have millions of followers. Generation Alpha perceives them as even more authentic than real people, which creates many challenges, says Dr. Ada Florentyna Pawlak.