Expectations towards the Morality of Robots: An Overview of Empirical Studies


Aleksandra Wasielewska


The main objective of this paper is to discuss people’s expectations towards social robots’ moral attitudes. The conclusions are based on the results of three selected empirical studies, which used stories of robots (and humans) acting in hypothetical scenarios to assess the moral acceptability of their attitudes. The analysis reveals both differences and similarities in expectations towards robot and human attitudes. Decisions to remove someone’s autonomy are less acceptable when made by robots than by humans. In certain circumstances, protecting a human’s life is considered more morally right than protecting the robot’s existence. Robots are also more strongly expected to make utilitarian choices than human agents. However, there are situations in which people make consequentialist moral judgements when evaluating both human and robot decisions. Robots and humans receive a similar overall amount of blame. Furthermore, it can be concluded that robots should protect their own existence and obey people, but in some situations they should be able to hurt a human being. Differences in the results can be partially explained by the character of the experimental tasks. The present findings may prove useful in implementing morality in robots, as well as in the legal evaluation of their behaviours and attitudes.




How to Cite
Wasielewska, A. (2021). Expectations towards the Morality of Robots: An Overview of Empirical Studies. ETHICS IN PROGRESS, 12(1), 134-151. https://doi.org/10.14746/eip.2021.1.10
References


  1. Anderson S. 2008. “Asimov’s Three Laws of Robotics and Machine Metaethics,” AI & Society 22:477–493.
  2. Asimov I. 1981. “The Three Laws,” Compute! 11(18):18.
  3. Awad E., Dsouza S., Kim R., Schulz J., Henrich J., Shariff A., Bonnefon J.-F., & Rahwan I. 2018. “The Moral Machine Experiment,” Nature 563(7729):59–64.
  4. Fong T., Nourbakhsh I., & Dautenhahn K. 2003. “A Survey of Socially Interactive Robots,” Robotics and Autonomous Systems 42(3/4):143–166.
  5. Giger J.-C., Moura D., Almeida N., & Piçarra N. 2017. “Attitudes Towards Social Robots: The Role of Gender, Belief in Human Nature Uniqueness, Religiousness and Interest in Science Fiction,” in S. N. de Jesus & P. Pinto (Eds), Proceedings of II International Congress on Interdisciplinarity in Social and Human Sciences, Vol. 11 (p. 509). Research Centre for Spatial and Organizational Dynamics, University of Algarve Faro, Portugal.
  6. Hagendorff T. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines,” Minds and Machines 30:99–120.
  7. Hoffmann C. H. & Hahn B. 2019. “Decentered Ethics in the Machine Era and Guidance for AI Regulation,” AI & Society 35:1–10.
  8. Jarmakowski-Kostrzanowski T. & Jarmakowska-Kostrzanowska L. 2016. “Polska adaptacja kwestionariusza kodów moralnych (MFQ-PL)” [The Polish Adaptation of the Moral Foundations Questionnaire (MFQ-PL)], Psychologia Społeczna 11:489–508.
  9. Laakasuo M., Kunnari A., Palomäki J., Rauhala S., Koverola M., Lehtonen N., Halonen J., Repo M., Visala A., & Drosinou M. 2019. “Moral Psychology of Nursing Robots – Humans Dislike Violations of Patient Autonomy But Like Robots Disobeying Orders.” URL: https://psyarxiv.com (retrieved on July 18, 2020).
  10. Ljungblad S., Nylander S., & Nørgaard M. 2011 (March). “Beyond Speculative Ethics in HRI? Ethical Considerations and the Relation to Empirical Data,” Proceedings of the 6th International Conference on Human–Robot Interaction (HRI) (pp. 191–192). IEEE.
  11. Malle B. F., Scheutz M., Arnold T., Voiklis J., & Cusimano C. 2015 (March). “Sacrifice One for the Good of Many? People Apply Different Moral Norms to Human and Robot Agents,” 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 117–124). IEEE.
  12. Malle B. F. & Thapa Magar S. 2017 (March). “What Kind of Mind Do I Want in My Robot? Developing a Measure of Desired Mental Capacities in Social Robots,” Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human–Robot Interaction (HRI) (pp. 195–196). IEEE.
  13. MoralFoundations.org 2016. “Moral Foundations Theory.” URL: https://www.moralfoundations.org/ (retrieved on January 5, 2019).
  14. Murphy R. & Woods D. D. 2009. “Beyond Asimov: The Three Laws of Responsible Robotics,” IEEE Intelligent Systems 24(4):14–20.
  15. Sullins J. P. 2006. “When Is a Robot a Moral Agent?,” Machine Ethics 6:23–30.
  16. Voiklis J., Kim B., Cusimano C., & Malle B. F. 2016 (August). “Moral Judgments of Human vs Robot Agents,” 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 775–780). IEEE.