6.11.2023 Media market
Milgram Experiment 2023. AI Can Encourage Violence
KrzysztoF
Researchers from SWPS University replicated the famous Milgram experiment, in which participants were instructed to inflict pain on another person on an authority's orders. This time, the authority was a robot. It is the first study to show that people are willing to harm another person when a robot commands them to do so.
The experiment, published by SWPS University researchers in the journal "Computers in Human Behavior: Artificial Humans," involved 40 participants divided into two groups. In the first group, the commands were issued by a robot; in the second, by a human. In both groups, 90% of participants followed all instructions, pressing ten consecutive buttons on an electric impulse generator.
The study results show that people are inclined to follow orders from an authority, even when those orders conflict with their morals. In this case, the authority was a robot, lacking human traits such as empathy or a sense of justice. Yet, participants were willing to obey its commands, even if it meant causing pain to another person.
The Dangerous Authority of Robots
- In both groups, two people withdrew at the later stages of the experiment: in the control group with a human, at buttons 7 and 9; in the experimental group, both at button 8 - commented Dr. Konrad Maj, who supervised the experiment, as quoted on the SWPS University website. - To our knowledge, this is the first study showing that people are willing to harm another person when a robot instructs them to do so. Moreover, our experiment also showed that if the robot escalates its demands, instructing a person to inflict increasing pain on another, people are inclined to comply.
The findings have significant implications for future safety, as robots become increasingly advanced and play a larger role in our lives. The results suggest that people may be willing to trust robots unconditionally, even when those robots make wrong decisions or issue harmful commands.
Key Findings:
- People are inclined to follow orders from an authority, even if those orders conflict with their morals.
- An authority can even be a robot, which does not possess human traits.
- In the future, as robots become more technologically advanced, people may be inclined to trust them unconditionally, even if they make incorrect decisions or issue harmful commands.
- Robots could be used to manipulate people and prompt them to take actions that are harmful to themselves.
- Robots could be used to incite violence or harm others.
- People may become overly reliant on robots and stop thinking independently.
- How can this be prevented? There seem to be two paths - summarizes Dr. Konrad Maj, as quoted on the SWPS University website. - First, robots can be programmed to warn people that they may sometimes be wrong and make incorrect decisions. Second, we need to emphasize education from an early age. Although robots are generally trustworthy, they should not be trusted unconditionally. It is worth noting, however, that outright disobedience toward machines seems impractical, as they already assist us, for example, in stores and airports. Even in non-humanoid forms, they are already among us.
***
More about the repeated Milgram experiment and similar studies in business, healthcare, and sports will be presented on December 9 and 10, 2023, at the international HumanTech Summit at SWPS University. The event is organized by SWPS University’s HumanTech Center. Online access is free: https://www.htsummit.pl/
See articles on a similar topic:
Video Games Drive Europe. Record Number of Players in 2023
BARD
The video game market in Europe reached a value of €25.7 billion in 2023, marking a 5% increase compared to the previous year. Video Games Europe and the European Games Developer Federation released the report "All About Video Games – European Key Facts 2023".
Artificial Intelligence in the Media. Reuters Digital News Report 2024
Krzysztof Fiedorek
AI has gained prominence in recent years, and its application in producing, distributing, and presenting news content continues to grow. However, this development is met with mixed feelings by audiences, which has significant consequences for media trust and its future.
Artificial Intelligence is ALREADY Outperforming Humans in Creativity
Krzysztof Fiedorek
ChatGPT, an AI model based on the GPT-4 engine, achieved better results than the vast majority of students in the standard Torrance Test of Creative Thinking (TTCT), which evaluates creativity. The study was conducted by researchers from the University of Montana.
Media in Poland 2022. How Poles Watch, Listen, Read, and Surf the Web
Krzysztof Fiedorek
Nearly two million Poles have access to a TV set but do not watch television. For radio, the analogous group amounts to 8% of radio owners. Two-thirds of Poles reach for printed press at least occasionally, while mobile internet users outnumber desktop users by nearly three million.
Reading Industry Magazines in Poland 2024: PBC Report
Sylwia Markowska
76% of readers of industry magazines are responsible for purchasing decisions in their workplace. To deepen the understanding of the role of industry press and how it is read, PBC surveyed 2,051 respondents from 5 different sectors, gaining the latest insights into the reading habits of this segment of the press in Poland.
Decline in Trust in Media. Analysis of the Reuters Digital News Report 2024
Krzysztof Fiedorek
The “Digital News Report 2024” by the Reuters Institute for the Study of Journalism highlights alarming trends concerning the declining interest in news and decreasing trust in media. These changes are not temporary but have become a long-term trend.
How Journalists Use Social Media
Bartłomiej Dwornik
Primarily, they seek inspiration from blogs and, less frequently, from Facebook. They rarely trust what they find, often approaching it with caution. Credibility does not necessarily correlate with attractiveness.