Why Can't We Regard Robots As People?


André Schmiljun


With the development of autonomous robots, one day probably capable of speaking, thinking, learning, self-reflecting, and sharing emotions, and in fact with the rise of robots becoming artificial moral agents (AMAs), robot scientists like Abney, Veruggio, and Petersen are already optimistic that sooner or later we will need to call those robots "people" or rather "Artificial People" (AP). The paper rejects this forecast, because its argument rests on three conflicting metaphysical assumptions. The first is the idea that it is possible to precisely define persons and apply the definition to robots, or use it to differentiate human beings from robots. Further, the argument for APs presupposes a position of non-reductive physicalism (second assumption) and materialism (third assumption), ultimately producing implausible convictions about future robotics. I therefore suggest following Christine Korsgaard's defence of animals as ends in themselves with moral standing. I show that her argument can be extended to robots as well, at least to robots which are capable of pursuing their own good (even if they are not rational). Korsgaard's interpretation of Kant delivers an option that allows us to leave complicated metaphysical notions like "person" or "subject" out of the debate, without denying robots' status as agents.




How to Cite
Schmiljun, A. (2018). "Why Can't We Regard Robots As People?" Ethics in Progress, 9(1), 44-61. https://doi.org/10.14746/eip.2018.1.3
Author Biography

André Schmiljun, Humboldt-Universität zu Berlin

André Schmiljun, PhD in Philosophy, studied History and Philosophy at Humboldt University Berlin. In his doctoral thesis (under the supervision of Prof. Dr. Ch. Möckel and Prof. Dr. S. Dietzsch) he analysed the phenomenon of antipolitics in the work of Friedrich W. J. Schelling (1775-1854). Since 2017 he has been working on his habilitation at Adam Mickiewicz University in Poznań (under the supervision of Prof. Dr. Ewa Nowak), concerning the possibility of moral competence in Artificial Intelligence.


  1. Abney K. & Veruggio G. 2014. "Roboethics: The Applied Ethics for a New Science." In Lin P., Abney K., & Bekey G. A. (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press (347-64).
  2. Abney K. 2014. "Robotics, Ethical Theory, and Metaethics: A Guide for the Perplexed." In Lin P., Abney K., & Bekey G. A. (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press (35-52).
  3. Beckermann, A. 2001. Analytische Einführung in die Philosophie des Geistes. Berlin: Walter de Gruyter.
  4. Bringsjord S. 1992. What Robots Can and Can't Be. Luxemburg: Springer Science+Business Media S.A.
  5. Broad C. D. 1925. The Mind and Its Place in Nature. New York: Harcourt, Brace & Company, Inc.
  6. Davidson D. 2005. “Rationale Lebewesen.” In Wild M. & Perler D. (Eds.), Der Geist der Tiere. Frankfurt am Main: Suhrkamp (117-31).
  7. Dennett D. C. 2005. "Das Bewusstsein der Tiere: Was ist wichtig und warum?" In Wild M. & Perler D. (Eds.), Der Geist der Tiere. Frankfurt am Main: Suhrkamp (389-407).
  8. Dietrich E. 2018. "Homo Sapiens 2.0: Building the Better Robots of Our Nature." In Anderson M. & Anderson S. L. (Eds.), Machine Ethics. Cambridge: Cambridge University Press (531-37).
  9. Dretske F. 2005. "Minimale Rationalität." In Wild M. & Perler D. (Eds.), Der Geist der Tiere. Frankfurt am Main: Suhrkamp (213-22).
  10. Dretske F. 2001. "Animal Minds." Philosophic Exchange 31(1):21-33.
  11. Gabriel M. 2013. Warum es die Welt nicht gibt. Berlin: Ullstein.
  12. Hambling D. 2018. We: Robots. The Robots That Already Rule the World. London: Quarto Publishing.
  13. Kahneman D. 2011. Schnelles Denken, Langsames Denken. München: Pantheon.
  14. Kallestrup J. 2006. “The Causal Exclusion Argument.” Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition 131(2):459-85.
  15. Kant I. 1785. Groundwork for the Metaphysics of Morals. Ed. and trans. by A. W. Wood. New Haven & London: Yale University Press.
  16. Kim J. 1993. Supervenience and Mind. Cambridge: Cambridge University Press.
  17. Kim J. 1999. “Making Sense of Emergence.” Philosophical Studies 95(1/2):3-36.
  18. Korsgaard C. M. 2012. “A Kantian Case for Animal Rights”. In Michel M., Kühne D., & Hänni J. (Eds.), Animal Laws – Tier und Recht. Developments and Perspectives in the 21st Century. Zürich/ St. Gallen: DIKE (1-28).
  19. Korsgaard C. M. 2009. Self-Constitution: Agency, Identity, and Integrity. Oxford & New York: Oxford University Press.
  20. Korsgaard C. M. 2004. “Fellow Creatures: Kantian Ethics and Our Duties to Animals.” Tanner Lectures on Human Values 24:77-110.
  21. Korsgaard C. M. 2018. Fellow Creatures: Our Obligations to the Other Animals. Oxford: Oxford University Press.
  22. Lind G. 2016. How to Teach Morality? Berlin: Logos Verlag.
  23. Lin P., Abney K., & Bekey G. A. (Eds.) 2014. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.
  24. Lin P., Jenkins R., & Abney K. (Eds.) 2017. Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Oxford: Oxford University Press.
  25. Locke J. 1894. An Essay Concerning Human Understanding. Ed. by A. Campbell Fraser. 2 vols. Oxford: Clarendon Press.
  26. La Mettrie J. O. 1774. Der Mensch eine Maschine (orig. L'homme Machine). Berlin: Holzinger.
  27. Nagel T. 1974. “What Is It Like to Be a Bat?” The Philosophical Review 83(4):435-50.
  28. Nagel T. 1986. The View from Nowhere. Oxford – New York – Toronto: Oxford University Press.
  29. Nowak E. 2017. "Can Human and Artificial Agents Share an Autonomy, Categorical Imperative-based Ethics and 'Moral' Selfhood?" Filozofia Publiczna i Edukacja Demokratyczna 6(2):169-208. E-access: https://pressto.amu.edu.pl/index.php/fped/article/view/13198/12903 , https://doi.org/10.14746/fped.2017.6.2.20
  30. Petersen S. 2014. "Designing People to Serve." In Lin P., Abney K., & Bekey G. A. (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press (283-98).
  31. Scheutz M. & Schermerhorn P. 2009. "Affective Goal and Task Selection for Social Robots." In Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence. Hershey: IGI Global (74-87).
  32. Schermerhorn P., Kramer J., Brick T., Anderson D., Dingler A., & Scheutz M. 2006. "DIARC: A Testbed for Natural Human-Robot Interaction." Proceedings of the AAAI 2006 Robot Workshop.
  33. Searle J. R. 1984. Minds, Brains and Science. Cambridge, MA: Harvard University Press.
  34. Searle J. R. 2005. "Der Geist der Tiere." In Wild M. & Perler D. (Eds.), Der Geist der Tiere. Frankfurt am Main: Suhrkamp (132-52).
  35. Schlosser M. E. 2006. “Causal Exclusion and Overdetermination.” In Di Nucci E. & McHugh J. (Eds.), Content, Consciousness and Perception. Cambridge: Cambridge Scholars Press (139-55).
  36. Schmiljun A. 2017a. “Robot Morality. Bertram F. Malle’s Concept of Moral Competence.” Ethics in Progress 8(2):69-79.
  37. Schmiljun A. 2017b. "Symbolische Formen und Sinnfelder: Probleme und Unterschiede eines gemeinsamen Projekts." In Hamada Y., Favuzzi P., Klattenhoff T., & Nordsieck V. (Eds.), Symbol und Leben: Grundlinien einer Philosophie der Kultur und Gesellschaft. Berlin: Logos (129-44).
  38. Spaemann R. 2006. Personen: Versuche über den Unterschied zwischen 'etwas' und 'jemand'. Stuttgart: Klett-Cotta.
  39. Stärk J.-P. 2013. Das Leib-Seele-Problem, die Hirnforschung und die exzentrische Positionalität. Hamburg: Diplomica.
  40. Sturma D. 1997. Philosophie der Person. Die Selbstverhältnisse von Subjektivität und Moralität. Paderborn – München – Wien – Zürich: Mentis.
  41. Wallach W. & Allen C. 2009. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press.