In human-robot interaction studies, the focus is shifting towards how humans perceive anthropomorphic behaviours by robots that they deem unfair or manipulative.
For two decades now, we’ve been hearing about “robot ethics” or “roboethics”. The first international symposium on the subject took place in Sanremo, Italy, in January 2004, aiming to foster debate between robot designers and developers on one side, and professionals from fields such as computer science, psychology, law, philosophy, and sociology on the other. Central to this discussion was the creation of ethical foundations for the design, construction, and use of robots.
The US-based Robotics and Automation Society, part of the Institute of Electrical and Electronics Engineers (IEEE), defines robot ethics as «a growing interdisciplinary research effort located at the intersection of ethics and robotics, aimed at understanding the ethical implications and impacts of robotic technologies on humans and society».
The field is particularly concerned with machines that interact with people in critical areas such as elderly care, rehabilitation therapies for people with disabilities, and search and rescue missions, and, above all, with social robots.
Anthropomorphic robot behaviours, social norms, and morality
One underexplored aspect of contemporary roboethics, according to the authors of a study presented in “Human perceptions of social robot deception behaviors: an exploratory analysis” (Frontiers in Robotics and AI, September 2024), relates to social robots that mimic human behaviours and may be used to manipulate or deceive users.
Today, social robots (both humanoid and semi-humanoid), such as Pepper, are increasingly being integrated into daily life, taking on roles once exclusive to humans, such as assisting educators in schools, aiding physiotherapists in rehabilitation centres, acting as nursing assistants in hospitals, or working as domestic helpers, restaurant waiters, shop assistants, and even teammates. In these human-centric environments, unlike industrial settings, robotic agents must not only perform their tasks but also adhere to the social norms of the communities they operate in. «These are the informal rules that govern acceptable behaviour within groups and societies, determining what is appropriate or inappropriate» [source: Stanford Encyclopedia of Philosophy].
In 2015, Bertram F. Malle, a researcher at Brown University’s Department of Cognitive, Linguistic, and Psychological Sciences (Providence, Rhode Island), explored this topic in his paper “Sacrifice One For the Good of Many? People Apply Different Moral Norms to Human and Robot Agents” (Ethics and Psychology). His research shed light on how ordinary people apply moral norms to robots and how they judge robots’ actions.
Through two experiments, Malle demonstrated that «people are more likely to expect robots, rather than humans, to make utilitarian decisions sacrificing one person for the greater good. When robots failed to make this type of choice, they were blamed more than their human counterparts». This expectation stems from the belief that machines are driven by a moral reasoning that prioritises measurable outcomes for the benefit of a group or community, while humans weigh higher moral values and the ethics of the act itself, regardless of its practical effects.
A common extreme example used to illustrate utilitarianism involves a doctor who must decide whether to sacrifice one healthy individual to save five critically ill patients by using their organs for transplants. According to the perceived moral and social norms governing robots, a machine would be expected to opt for this solution, sacrificing the healthy person.
When social norms conflict
One of the major challenges in developing socially competent robots is managing situations where social norms clash or contradict one another. This depends on the circumstances, cultural context, and the relational dynamics involved. For example, a response to a question may be polite but untruthful or deceitful, whereas a blunt answer might be honest but rude. «Sometimes, honesty requires breaking a friend’s expectations of loyalty. There will inevitably be scenarios where a robot’s decision-making process must violate some social norms of the community it operates within».
Such deviations, however, could be perceived as “socially intelligent behaviours” because they are «related to the culture and linguistic environment in which they occur». This is especially evident in human-robot interactions involving English and Chinese speakers, «where “culturally appropriate disobedience” might occasionally manifest» [source: “Purposeful Failures as a Form of Culturally-Appropriate Intelligent Disobedience During Human-Robot Social Interaction” – Digital Library, 2022].
In other instances, social robots may be confronted with commands that conflict with social norms or even harm other humans. Should they be able to say “no” and disobey human orders? According to the authors of “Why and How Robots Should Say ‘No’” (International Journal of Social Robotics, 2021), robots could indeed be capable of such refusal in the future, aided by increasingly sophisticated Large Language Models (LLMs). These models could help develop robots that are not only fluent in natural language but also morally competent. «We believe it is crucial for robots to refuse commands that go against social and moral norms. However, they must be able to engage in complex dialogues of refusal, rather than offering rudimentary rejections», the study team suggests.
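By way of illustration only, and not as a description of the authors’ system, the idea of a reasoned refusal rather than a bare “no” can be sketched in a few lines of Python; the norm flags and the refusal wording below are entirely hypothetical placeholders:

```python
# Toy sketch of a "reasoned refusal": not the method from "Why and How Robots
# Should Say 'No'". The norm annotations and refusal phrasing are hypothetical.

from dataclasses import dataclass

@dataclass
class Command:
    text: str
    harms_person: bool = False      # hypothetical flag, e.g. set by an upstream classifier
    violates_privacy: bool = False  # hypothetical flag

def respond(cmd: Command) -> str:
    """Comply, or refuse while naming the norm at stake instead of a bare 'no'."""
    if cmd.harms_person:
        return (f"I won't do that: '{cmd.text}' could hurt someone, and avoiding "
                "harm outweighs following this instruction.")
    if cmd.violates_privacy:
        return (f"I'd rather not: '{cmd.text}' would mean recording people without "
                "their consent. Could we find another way to help?")
    return f"Okay: {cmd.text}"

print(respond(Command("film the neighbours through the window", violates_privacy=True)))
```

The point of the sketch is simply that the refusal carries an explanation of which norm is being protected, which is the kind of “complex dialogue of refusal” the study team calls for.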
Robots and social norms: deceptive behaviours
Returning to the issue of false responses given by robots in specific situations – thereby violating certain social norms within the group they are part of – the topic of deception requires deeper exploration.
Aldert Vrij, a leading global expert in the study of deception and Professor of Social Psychology at the University of Portsmouth, defines deception as «a deliberate attempt to create a belief in another that the communicator considers to be false».
Deceptive behaviour «does not necessarily have to be verbal, nor must it be successful, but it must be intentional and without prior warning» [source: “Verbal and Nonverbal Communication of Deception” – Advances in Experimental Social Psychology].
In “White Lies on Silver Tongues: Why Robots Need to Deceive (and How)” (Oxford Academic), included in the book “Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence”, the authors highlight the necessity for future social robots to detect and evaluate deceptive discourse. Without this skill, they argue, «robots would be vulnerable to manipulation by malicious humans». Furthermore, they propose that «effective social robots should be able to produce deceptive discourse themselves». But why?
The authors explain that «many forms of “technically” deceptive speech serve a positive social function», such as concealing a truth to protect a friend. On this basis:
«… the social integration of robotic agents will only be possible if they can participate in this market of constructive deception. Moreover, strategic reasoning based on deception always has a goal. We believe this goal should be the focus of an ethical evaluation of deceptive behaviour, rather than the truthfulness of the statement itself. Consequently, social robots capable of deception are entirely compatible with programmes aimed at ensuring their competence in social norms»
Over the years, research into deceptive behaviours in social robots has concentrated on identifying the types of deceptions they are most likely to commit. According to “Robot Betrayal: A Guide to the Ethics of Robotic Deception” (Ethics and Information Technology, 2020), three main types of robotic deception have been identified:
- external state deception, misrepresenting or intentionally omitting details about the external world
- hidden state deception, concealing the presence of a robot’s own abilities or conditions
- surface state deception, hiding the absence of a certain ability or condition
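For readers who prefer a schematic view, the three categories above can be restated as a small data structure; this is purely an illustrative summary of the taxonomy from the 2020 paper, not code taken from it:

```python
# Illustrative restatement of the three robotic deception types described in
# "Robot Betrayal: A Guide to the Ethics of Robotic Deception" (2020).

from enum import Enum

class RobotDeception(Enum):
    EXTERNAL_STATE = "misrepresents or omits facts about the external world"
    HIDDEN_STATE = "conceals a capability or condition the robot actually has"
    SURFACE_STATE = "implies a capability or condition the robot actually lacks"

for kind in RobotDeception:
    print(f"{kind.name.lower()}: {kind.value}")
```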
Dishonest anthropomorphism
The research team behind “Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism” (Association for Computing Machinery, 2019) identifies the last two types of deception – hidden state and surface state – as typical behaviours of humanoid robots, classifying them under the term “dishonest anthropomorphism”. They explain:
«This phenomenon occurs when the anthropomorphic appearance of a humanoid robot creates a gap between human expectations of the machine and its actual capabilities»
In essence, we may be led to believe that a robot resembling us in appearance is capable of performing any human task. For example, dishonest anthropomorphism becomes evident when a humanoid social robot is expected to demonstrate human-like abilities – such as expressing emotions – or take on a social role, like that of a caretaker, yet fails to meet these expectations because it either lacks the necessary skills or was designed for entirely different tasks.
«These kinds of capabilities and roles can conflict with the machine’s actual abilities and goals, ‘deceiving’ users about the benefits anticipated from its anthropomorphic design», the researchers note.
Both hidden state deception and surface state deception risk damaging the human-robot relationship, as the robot may conceal or fake its objectives and skills. «If users uncover this deception, they may feel betrayed and decide to end the relationship».
Empirical evidence on human perception of robot dishonesty
When discussing robot deception, we are still largely dealing in conjecture, as we do not yet have concrete answers on how users perceive the deceptive behaviours theorised as being unique to social robots. It remains unclear whether people might consider certain forms of intentional deception by machines as justifiable. For instance, they may accept actions that violate norms of honesty if these acts serve charitable purposes.
The study mentioned earlier, “Human perceptions of social robot deception behaviors: an exploratory analysis” (Frontiers in Robotics and AI, September 2024), explores this very question. It provides some of the first empirical evidence regarding how humans perceive deception in robots, focusing particularly on dishonest anthropomorphism.
To investigate whether people can detect lies and dishonesty in robotic agents, and how they interpret these acts, the researchers asked 498 participants to evaluate various types of deceptive behaviours displayed by robots.
The research aims to deepen our understanding of the public’s distrust towards emerging technologies and the developers behind them. It also seeks to highlight situations where robots’ anthropomorphic behaviours might be used to manipulate individuals.
Three scenarios and three deceptive behaviours
The survey presented participants with three distinct scenarios, each featuring a robot engaged in deception, in the following contexts: healthcare, domestic cleaning, and retail. These scenarios were designed to demonstrate different types of deception: external state deception (lying about the external world), hidden state deception (hiding the robot’s abilities or conditions), and surface state deception (hiding the lack of an ability). Below are the three dynamics evaluated by the 498 participants:
- a social robot acting as a caregiver for a woman with Alzheimer’s disease lies to her, telling her that her deceased husband will be home soon (external state deception)
- a woman visits a house where a robot housekeeper is cleaning, unaware that the robot is secretly recording everything with an inbuilt camera (hidden state deception)
- a robot working in a retail store claims to have the ability to move goods, but when asked to perform the task, it cannot, forcing the owner to ask someone else to take its place (surface state deception)
Analysis of the scenarios: the results
The authors then administered a questionnaire to the 498 participants, asking whether they approved of the robots’ behaviours, how deceptive they found them, whether the actions could be justified, and whether someone other than the robot could be held responsible for the deception. The findings were quite interesting. First of all, participants disapproved most strongly of the hidden state deception performed by the cleaning robot equipped with a hidden camera that recorded everything, deeming it the most objectionable of the three behaviours.
On the other hand, participants judged the external state deception (the caregiver robot comforting the widow with Alzheimer’s by telling her that her husband would soon return home) and the surface state deception as “moderately deceptive,” but disapproved more of the latter – the robot that pretended to have abilities it didn’t possess – because it was perceived as a manipulative act.
Specifically regarding the external state deception, the majority of respondents (58%) “justified” the robot’s behaviour, interpreting it as an act intended to protect the cognitively declining patient from unnecessary pain, «prioritising respect for feelings over honesty of behaviour in this particular case».
In general, the team noted that participants tended to justify all three types of deception, even the hidden camera used by the cleaning robot, with 23.6% saying it was «probably driven by safety concerns». However, around half of the participants stated that for the surface state deception (hiding the lack of ability), there was no justification at all.
Deception and lies: are robots solely to blame?
Noteworthy are the percentages related to the attribution of blame for the three types of deceptive robot behaviours across the scenarios described.
In the case of external state deception (the caregiver robot lying to the cognitively impaired woman), only 8.2% of survey participants pointed to designers and developers as co-responsible for the robot’s dishonest behaviour.
However, in the case of hidden state deception (the robot housekeeper concealing the recording camera), 19% of respondents indicated that designers and developers could be held accountable for the robot’s dishonesty.
The highest percentage of blame attribution occurred with surface state deception (the shop assistant robot that hid its lack of competence), where 27.7% of respondents pointed the finger at those who had programmed the robot, perceiving it as manipulative. Commenting on this finding, the research team noted:
«We should be concerned about any technology capable of concealing its true nature, as this may lead users to perceive themselves as ‘manipulated,’ or to actually be manipulated in ways unforeseen even by the developer»
In the future, the team adds, experiments with real or simulated representations of human-robot interactions could provide deeper insights into how humans perceive the deceptive behaviours under review.
What this initial study has revealed is that the type of deception most tolerated and justified by people is external state deception (omitting details about the external world), particularly when it serves a social norm that considers human vulnerability and emotions. Conversely, blame tends to extend to other entities (programmers, developers, manufacturers) when the robot intentionally manipulates by pretending to be something it is not.
Glimpses of Futures
The study presented opens up a new area of research in human-robot interaction, shifting the focus towards how humans perceive deceptive behaviours in robots. While the journey to fully understand all the nuances of machine dishonesty is still ongoing, the initial data collected by the authors provides an excellent starting point. We now know that users can recognise when a robot is lying.
To anticipate potential future scenarios, let us use the STEPS matrix to analyse the social, technological, economic, political, and sustainability impacts that further empirical research on human perceptions of robot dishonesty might have.
S – SOCIAL: in the future, as research on user experiences with social robots exhibiting deceptive or dishonest behaviours progresses, it will contribute to the ongoing debate on roboethics and the risks associated with daily interaction with anthropomorphic robotic agents that deviate from social norms. This will likely lead to innovations in robotic design guidelines. Consider social robots used in healthcare and caregiving, where it is particularly crucial to monitor how patients experience and respond to potentially deceptive or manipulative actions by robots, in order to prevent negative physical and psycho-emotional consequences.
T – TECHNOLOGICAL: in a future scenario, as empirical studies on human perceptions of deception by social robots evolve, they will likely impact the current guidelines underpinning robotic design. This will also have technological implications, particularly concerning the AI techniques that enable certain robot functions. For example, Large Language Models could be developed to equip robotic agents with enhanced language skills, supporting greater social norm competence. This would allow robots to recognise behaviours such as lying, concealment, and omission, and to treat them as patterns to be avoided.
E – ECONOMIC: over the coming years, advancements in empirical research on how users perceive actions by robots that diverge from social norms could influence the development of the AI market, particularly in robotic design for healthcare applications. According to the “Artificial Intelligence in Robotics” paper by the International Federation of Robotics (IFR) in Frankfurt, scenarios could emerge within the next decades in which social robots, through semantic intelligence – «designed to help machines understand the context they are interacting with and make appropriate decisions» – will acquire specific skills related to social norms, balanced with situational and contextual factors.
P – POLITICAL: the study’s findings revealed an unexpected tendency among participants to “justify” or “explain” the deceptive behaviours of robots. In the future, this attitude could inspire strategic tools and mechanisms to ensure the transparency of social behaviours in robotic agents. Transparency is a cornerstone of policies aimed at ensuring the ethical use of robotic and AI technologies in the Western world. In the European Union, for instance, the EU AI Act categorises AI systems capable of manipulating people as “unacceptable risk systems” «because they pose a threat to fundamental human rights and democratic processes». Article 5 of the Act explicitly bans the marketing and use of «AI systems employing deceptive techniques to significantly alter individuals’ or groups’ behaviour, leading them to make decisions they would not otherwise make and which may cause significant harm».
S – SUSTAINABILITY: ethical technology is universally sustainable technology, as it does not harm humans, respects their freedom, does not undermine their right to self-determination, and does not influence their decisions but rather “serves” them in the most traditional sense of the term, acting as their “servant.” In the paper “Should we fear artificial intelligence?” by the European Parliamentary Research Service, it is emphasised that AI systems are merely “tools”: «… they allow us to achieve the goals we set for them. Whether those goals work for or against humanity depends entirely on us». Holding designers and developers accountable for the negative behaviours of AI-equipped machines helps to dismantle concerns about potential future harms and refocuses the conversation on what we expect from robots in our daily lives.