The Ethical Significance of Human Likeness in Robotics and AI


Peter Remmers

Abstract

A defining goal of research in AI and robotics is to build technical artefacts as substitutes for, assistants to, or enhancements of human action and decision-making. But both in reflection on these technologies and in interaction with the respective technical artefacts, we sometimes encounter certain kinds of human likeness. To clarify its significance, I highlight three aspects. First, I broadly investigate some relations between humans and artificial agents by recalling key points from the debates on Strong AI, on the Turing Test, on the concept of autonomy, and on anthropomorphism in human-machine interaction. Second, I argue that the theoretical aspects of technological human likeness involve no serious ethical issues. Third, I suggest that although human likeness may not be ethically significant on the philosophical and conceptual levels, strategies that employ anthropomorphism in the technological design of human-machine collaborations are ethically significant, because artificial agents are then specifically designed to be treated in the ways we usually treat humans.

Article Details

Section
Core topics-related articles
Author Biography

Peter Remmers, Technical University of Berlin

Peter Remmers has been a research assistant since 2017 in the supporting project "Autonomous Robots for Assistance: Basic Interactive Skills" (ARAIG), sub-project "Ethical and Legal Aspects of Service Robots", TU Berlin. He received his Ph.D. from the Technical University of Berlin in 2017 with a dissertation on "Film as a Form of Knowledge". From 2009 to 2014 he was a teaching assistant at the Department of Philosophy at TU Berlin. Research focuses: philosophy and ethics of technology, epistemology, philosophy of film.

References

  1. Beer J. M., Fisk A. D., & Rogers W. A. 2014. “Toward a Framework for Levels of Robot Autonomy in Human-Robot Interaction.” Journal of Human-Robot Interaction 3(2):74-99.
  2. Bekey G. A. 2012. “Current Trends in Robotics: Technology and Ethics.” In: Robot Ethics. The Ethical and Social Implications of Robotics (pp. 17-34). Cambridge, Mass. – London: MIT Press.
  3. Chandrasekaran B. & Conrad J. M. 2015. “Human-Robot Collaboration: A Survey.” SoutheastCon:1-8.
  4. Christaller T. (Ed.) 2003. Autonome Maschinen. Wiesbaden: Westdeutscher Verlag.
  5. Coeckelbergh M. 2011. “Artificial Companions: Empathy and Vulnerability Mirroring in Human-Robot Relations.” Studies in Ethics, Law, and Technology 4(3):1-17.
  6. Coeckelbergh M. 2014. “The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics.” Philosophy & Technology 27(1):61-77.
  7. Darling K. 2016. “Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior towards Robotic Objects.” In: Calo, Froomkin, Kerr (Eds.), We Robot Conference 2012, University of Miami. Edward Elgar.
  8. Dennett D. C. 1987. The Intentional Stance. Cambridge, Mass.: MIT Press.
  9. Dretske F. 1994. “If You Can’t Make One, You Don’t Know How It Works.” Midwest Studies in Philosophy 19(1):468-82.
  10. Gunkel D. J. 2012. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, Mass.: MIT Press.
  11. Johnson D. G. & Verdicchio M. 2018. “Why Robots Should Not Be Treated Like Animals.” Ethics and Information Technology 20(4):291-301.
  12. Johnson D. G. & Noorman M. 2014. “Artefactual Agency and Artefactual Moral Agency.” In: P. Kroes & P.-P. Verbeek (Eds.), The Moral Status of Technical Artefacts (pp. 143-158). Dordrecht: Springer.
  13. Lanier J. 2010. You Are Not a Gadget: A Manifesto (1st ed.). New York, NY: Knopf.
  14. Miller K. D. 2015. “Will You Ever Be Able to Upload Your Brain?” New York Times, Oct. 10, 2015 (retrieved from https://nyti.ms/1VLghZ4).
  15. Müller M. F. 2014. “Von vermenschlichten Maschinen und maschinisierten Menschen.” In: S. Brändli, R. Harasgama, R. Schister, & A. Tamò (Eds.), Mensch und Maschine—Symbiose oder Parasitismus? (pp. 125-142). Bern: Stämpfli.
  16. Onnasch L., Maier X., & Jürgensohn T. 2016. “Mensch-Roboter-Interaktion—Eine Taxonomie für alle Anwendungsfälle.” baua: Fokus, Bundesanstalt für Arbeitsschutz und Arbeitsmedizin.
  17. Remmers P. 2020. “Would Moral Machines Close the Responsibility Gap? Reflections on Autonomous Artificial Agents.” In: B. Beck & M. Kühler (Eds.), Technology, Anthropology, and Dimensions of Responsibility. Stuttgart: J. B. Metzler (in press).
  18. Russell S. J. & Norvig P. 2016. Artificial Intelligence: A Modern Approach (3rd edition). London: Pearson Education.
  19. Searle J. R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3(3):417-24.
  20. Terveen L. G. 1995. “Overview of Human-Computer Collaboration.” Knowledge-Based Systems 8(2-3):67-81.
  21. Turing A. 2004. “Computing Machinery and Intelligence.” In: B. J. Copeland (Ed.), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life, Plus the Secrets of Enigma. Oxford: Oxford University Press.
  22. Weizenbaum J. 1976. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: Freeman.
  23. Yanco H. A. & Drury J. 2004. “Classifying Human-Robot Interaction: An Updated Taxonomy.” 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No. 04CH37583) 3:2841-2846.