Robot Morality: Bertram F. Malle’s Concept of Moral Competence

André Schmiljun

Abstract

Bertram F. Malle is one of the first scientists to combine robotics with moral competence. His theory outlines that moral competence can be understood as a system of five components: moral norms, a moral vocabulary, moral cognition, moral decision-making, and moral communication. After a brief introduction to robot morality (1), the essay analyses Malle's concept of moral competence (2) and discusses its consequences for the future of robot science (3). The thesis further argues that Malle's approach is insufficient for three reasons: his functional argument is oversimplified and therefore problematic; each component of his theory is inconsistent; and, finally, the concept is closely tied to our common understanding of personhood, which raises new philosophical questions surrounding the basic issue of whether and/or when machines can be considered persons.

Article Details

How to Cite
Schmiljun, A. (2018). Robot Morality: Bertram F. Malle’s Concept of Moral Competence. ETHICS IN PROGRESS, 8(2), 69-79. https://doi.org/10.14746/eip.2017.2.6
Section
Core topics-related articles
Author Biography

André Schmiljun, Humboldt University in Berlin

André Schmiljun - Ph.D. in Philosophy. His research interests cover robot ethics, German Idealism, and philosophy of mind. In his doctoral thesis (under the supervision of Christian Möckel and Steffen Dietzsch) he analysed the phenomenon of antipolitics in the work of Friedrich W. J. Schelling (1775-1854). Since 2017 he has been working on his habilitation at Adam Mickiewicz University in Poznań (under the supervision of Prof. Dr. Ewa Nowak) concerning the possibility of moral competence in Artificial Intelligence. For this project, he received a German Academic Exchange Service (DAAD) scholarship in 2019. Contact: schmiljun@insystems.de

References

  1. Abney K. & Veruggio G. 2014. "Roboethics: The Applied Ethics for a New Science," in P. Lin (Ed.), Robot Ethics (Intelligent Robotics and Autonomous Agents). Cambridge, MA: MIT Press (347-364).
  2. Beckermann A. 2001. Analytische Einführung in die Philosophie des Geistes. Berlin: Walter de Gruyter.
  3. Bendel O. 2013. "Wie viel Moral muss eine Maschine haben?" Liewo. Retrieved from http://blog.zdf.de/hyperland on August 12, 2017.
  4. Elster J. 1989. The Cement of Society: A Study of Social Order. New York, NY: Cambridge University Press.
  5. Frankfurt H. G. 1971. "Freedom of the Will and the Concept of a Person." Journal of Philosophy 68 (1): 5-20.
  6. Johansson L. 2013. Autonomous Systems in Society and War: Philosophical Inquiries. Stockholm: KTH Royal Institute of Technology.
  7. Johansson L. 2013. "Robots and the Ethics of Care." International Journal of Technology 4 (1): 67-82.
  8. Kahneman D. 2011. Schnelles Denken, Langsames Denken. München: Pantheon.
  9. Kohlberg L. 1964. "Development of Moral Character and Moral Ideology," in M. L. Hoffman & L. W. Hoffman (Eds.), Review of Child Development Research, Vol. I. New York: Russell Sage Foundation (381-431).
  10. Lin P. 2014. Robot Ethics (Intelligent Robotics and Autonomous Agents). Cambridge, MA: MIT Press.
  11. Lind G. 2016. How to Teach Morality? Berlin: Logos Verlag.
  12. Loh J. 2017. "Roboterethik. Über eine noch junge Bereichsethik." Information Philosophie 1: 20-33.
  13. Malle B. F. 2014. "Moral Competence in Robots?," in J. Seibt, R. Hakli, & M. Nørskov (Eds.), Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy. Frontiers in Artificial Intelligence and Applications, Vol. 273. Amsterdam: IOS Press (189-198).
  14. Malle B. F. 2015. "Integrating Robot Ethics and Machine Morality: The Study and Design of Moral Competence in Robots." Ethics and Information Technology 18 (4): 243-256. doi: 10.1007/s10676-015-9367-8.
  15. Malle B. F., Guglielmo S., & Monroe A. E. 2014. "Moral, Cognitive, and Social: A Theory of Blame." Psychological Inquiry 25 (1): 147-186.
  16. Roth, G. 2007. Persönlichkeit, Entscheidung und Verhalten. Warum es so schwierig ist, sich und andere zu ändern. Stuttgart: Klett-Cotta.
  17. Searle J. R. 1998. Geist, Sprache und Gesellschaft. Frankfurt/Main: Suhrkamp.
  18. Sparrow R. 2014. "Can Machines Be People? Reflections on the Turing Triage Test," in P. Lin (Ed.), Robot Ethics... (301-316).
  19. Sturma D. 1997. Philosophie der Person. Die Selbstverhältnisse von Subjektivität und Moralität. Paderborn – München – Wien – Zürich: Mentis.
  20. Sturma D. 2015. "Person sucht Person," in T. Buchheim & F. Hermanni (Hrsg.), Alle Persönlichkeit ruht auf einem dunkeln Grunde. München: De Gruyter.
  21. Winston M. 2008. "Moral Patients." Retrieved from http://ethicsofglobalresponsibility.blogspot.de/2008/02/moral-patients.html on August 13, 2017 (no pages).
  22. Wright J. C. & Bartsch K. 2008. "Portraits of Early Moral Sensibility in Two Children’s Everyday Conversation." Merrill-Palmer Quarterly 54 (1): Article 4.