7.04.2025, Media industry
Can a Robot Be a Good Boss? Researchers from SWPS Look for Answers
SWPS

The study, published in Cognition, Technology & Work, suggests that while people do follow robots' instructions to some extent, machines still fall short when it comes to authority, motivation, and efficiency. The findings open a new discussion about the psychology of automation in the workplace.
Robots in Charge. Obedience Without Authority
According to the research team from SWPS, robots can trigger obedience, but not as reliably as human supervisors. Dr. Konrad Maj, head of the HumanTech Center for Social and Technological Innovation, explained that although humanoid machines can function in positions of authority, their psychological impact is weaker. He noted that HR departments and employers must take into account how employees perceive robots, how much they trust them, and how likely they are to resist following their commands.
The study showed that robot supervision led to lower performance than human supervision: participants overseen by the robot took more time per task and completed fewer of them. This highlights a key issue: automation in leadership roles doesn’t guarantee better outcomes unless it’s well aligned with psychological factors.
The researchers emphasized that replacing a human boss with a machine should never be a simple switch. Whether people will accept and follow a robotic supervisor depends on cultural context, organizational norms, and ethical considerations - especially in regard to accountability for decisions made by machines.
What the Study Looked Like
The experiment took place in a lab at SWPS University and involved participants performing a basic yet repetitive task: renaming file extensions on a computer. Subjects were randomly assigned to one of two groups - one supervised by a humanoid robot named Pepper, and the other by a human researcher. If participants hesitated or stopped for more than 10 seconds, the robot or human would verbally prompt them to continue.
Results showed a significant difference in performance:
Supervision Type | Average Time per File | Average Files Renamed
--- | --- | ---
Human Supervisor | 23 seconds | 355 files
Robot Supervisor | 82 seconds | 224 files
The human-supervised group worked faster and completed more tasks. Researchers concluded that although robots can provide instructions, they do not stimulate productivity or motivation as effectively as human managers.
These outcomes suggest that emotional and social dynamics still play a major role in workplace performance. Robots, despite their technical capabilities, may lack the human connection that encourages engagement and effort.
Between Trust and the Uncanny Valley
The interaction between humans and robots is complex. According to the SWPS team, people tend to trust machines more when they have human-like features - but only to a certain point. When a robot looks almost, but not exactly, like a person, it can trigger discomfort or even fear. This phenomenon is known as the uncanny valley.
Dr. Maj explained that this discomfort arises from conflicting reactions: a machine that looks human but behaves oddly can confuse us both cognitively and emotionally. From an evolutionary standpoint, he said, humans are wired to avoid things that resemble illness or threat. Robots that mimic people but fall short can seem "wrong" in a way that puts us on edge.
At the same time, anthropomorphism - the attribution of human traits to machines - has practical benefits. It makes robots easier to work with. A robot that talks, moves, and reacts like a person feels more intuitive. But as Dr. Maj pointed out, there's a danger in making robots too human-like. People may start forming emotional bonds, assigning them rights, or expecting reciprocal behavior.
He warned that as robots become more advanced, they may blur the boundaries between tool and companion. In the long term, this could change how humans relate to one another. Hyper-personalized machines, always patient and available, could make human relationships feel messy or disappointing by comparison.
Ethics, Mental Health, and the Future of Work
The psychological and ethical implications of humanoid robots go beyond the workplace. Dr. Maj and his colleagues expressed concern that people who spend significant time interacting with social robots might become more isolated. Although such technologies may help the elderly or lonely feel supported, over-reliance on machines may reduce the quality and frequency of real human contact.
Dr. Maj stressed that humans need human interaction for mental well-being. AI agents and robots may offer convenience, but they cannot replace the richness of interpersonal relationships.
He also warned that robotization of the workplace, if poorly designed, could lower job satisfaction and engagement. While people may obey robotic managers, they don’t necessarily feel motivated by them. This, in turn, can reduce productivity. The researchers argued that any implementation of robotic systems in professional settings should be participatory, backed by evidence, and tailored to the specific needs of the workforce.
They concluded that workplace robotization cannot rely solely on technical functionality. It must also reflect social and psychological realities. Without this consideration, businesses risk introducing systems that look efficient on paper - but reduce performance and morale in practice.
***
source: SWPS University
More about research:
Maj, K., Grzyb, T., Doliński, D., & Franjo, M. (2025). Comparing obedience and efficiency in tedious task performance under human and humanoid robot supervision. Cognition, Technology & Work. https://doi.org/10.1007/s10111-024-00787-1
New articles in section Media industry
Trust in social media. YouTube beats TikTok and X
Krzysztof Fiedorek
Do we really trust social media? A new study reveals major differences in how top platforms are rated. Trust goes where there's authenticity, not just algorithms. The role of people is growing while brand influence is fading.
Artificial intelligence in newsrooms. Three realities of the AI era in media
Krzysztof Fiedorek
According to a report by the European Broadcasting Union, many newsrooms already use AI but still do not fully trust it. Audiences do not want "robotic" news, and the technologies themselves, though fast, can be costly, unreliable, and surprisingly human in their mistakes.
Zero-click search 2025. The even bigger end of clicking in search engines
Bartłomiej Dwornik
Google is giving up its role as a web signpost. More and more, it wants to be the destination of the whole journey. ChatGPT and Perplexity are hot on its heels, changing the rules of the search game. AI Overviews is a card from the same deck. Only content creators are losing ground in this race.
See articles on a similar topic:
Repression Against Media: Committee to Protect Journalists Report for 2024
Krzysztof Fiedorek
In 2024, at least 361 journalists worldwide were imprisoned, often for exposing the truth. In China, reporters are tracked using advanced facial recognition systems, in Israel, Palestinian journalists are jailed without trial, and in Myanmar, journalist Shin Daewe received a life sentence for... a drone.
Social Media in 2025. Generational Differences Are Crystal Clear
KFi
More and more people are saying they’re cutting back on time spent on social media. And while this doesn’t mean a mass exodus, the trend is clear. According to the latest GWI report, 31% of users said they had reduced their social media use. There’s also a subtle frustration.
How do we assess news credibility? Data analysis from 40 countries
Krzysztof Fiedorek
Are people defenseless against false information? Do they really fall for clickbait and fake news? A meta-analysis of 67 studies involving 200,000 people shows the problem is different than we thought. Instead of excessive gullibility, we are dealing with the opposite.