Moral Competence and Moral Orientation in Robots


André Schmiljun

Abstract

Two major strategies, the top-down and the bottom-up approach, are currently discussed in robot ethics for integrating morality into machines. I will argue that neither strategy is sufficient. Instead, I agree with Bertram F. Malle and Matthias Scheutz that robots need to be equipped with moral competence if we do not want them to become a risk in society, causing harm, social problems, or conflicts. However, I claim that we should not define moral competence merely as a collection of separate “elements” or “components” that can be changed at will. My suggestion is to follow Georg Lind’s dual-aspect, dual-layer theory of the moral self, which provides a broader perspective and a different vocabulary for the discussion in robot ethics. According to Lind, moral competence is only one aspect of moral behavior and cannot be separated from its second aspect: moral orientation. The thesis of this paper is therefore that integrating morality into robots has to include both moral orientation and moral competence.


Section: Core topics-related articles
Author Biography

André Schmiljun, Humboldt-Universität zu Berlin

André Schmiljun holds a Ph.D. in Philosophy. His research interests cover robot ethics, German Idealism, and the philosophy of mind. In his doctoral thesis (supervised by Christian Möckel and Steffen Dietzsch) he analysed the phenomenon of antipolitics in the work of Friedrich W. J. Schelling (1775-1854). Since 2017 he has been working on his habilitation at Adam Mickiewicz University in Poznań (supervised by Prof. Dr. Ewa Nowak), concerning the possibility of moral competence in Artificial Intelligence. For this project, he received a German Academic Exchange Service (DAAD) scholarship in 2019.

References

  1. Abney K. 2014. “Robotics, Ethical Theory, and Metaethics: A Guide for the Perplexed.” In: P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics (pp. 35-52). Cambridge: MIT Press.
  2. Allen C. 2011. “The Future of Moral Machines.” The New York Times: Opinionator. Retrieved December 29, 2014, from http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/.
  3. Antaki C. 1994. Explaining and Arguing: The Social Organization of Accounts. London: Sage.
  4. Anderson M. & Anderson S. L. 2018. “General Introduction.” In: M. Anderson & S. L. Anderson (Eds.), Machine Ethics (pp. 1-4). Cambridge: Cambridge University Press.
  5. Andreae J. H. 1987. “Design of Conscious Robots.” Metascience 5:41-54.
  6. Bringsjord S., Taylor J., Van Heuveln B., Arkoudas K., Clark M., & Wojtowicz R. 2018. “Piagetian Roboethics via Category Theory: Moving beyond Mere Formal Operations to Engineer Robots Whose Decisions Are Guaranteed to Be Ethically Correct.” In: M. Anderson & S. L. Anderson (Eds.), Machine Ethics (pp. 361-374). Cambridge: Cambridge University Press.
  7. Floridi L. 2018. “On the Morality of Artificial Agents.” In: M. Anderson & S. L. Anderson (Eds.), Machine Ethics (pp. 184-212). Cambridge: Cambridge University Press.
  8. Gips J. 2018. “Towards the Ethical Robot.” In: M. Anderson & S. L. Anderson (Eds.), Machine Ethics (pp. 244-253). Cambridge: Cambridge University Press.
  9. Guarini M. 2018. “Computational Neural Modeling and the Philosophy of Ethics: Reflections on the Particularism-Generalism Debate.” In: M. Anderson & S. L. Anderson (Eds.), Machine Ethics (pp. 316-334). Cambridge: Cambridge University Press.
  10. Greene J. D., Nystrom L. E., Engell A. D., Darley J. M., & Cohen J. D. 2004. “The Neural Bases of Cognitive Conflict and Control in Moral Judgment.” Neuron 44:389-400.
  11. Greene J. D. 2015. “The Rise of Moral Cognition.” Cognition 135:39-42.
  12. Hall J. S. 2018. “Ethics for Machines.” In: M. Anderson & S. L. Anderson (Eds.), Machine Ethics (pp. 28-44). Cambridge: Cambridge University Press.
  13. Kant I. 1993 [1785]. Grounding for the Metaphysics of Morals. Trans. by J. W. Ellington (3rd ed.). Indianapolis & Cambridge: Hackett.
  14. Kiefer A. B. 2019. A Defense of Pure Connectionism. Diss. The Graduate Center, City University of New York. DOI: https://doi.org/10.13140/RG.2.2.18476.51842.
  15. Kohlberg L. 1964. “Development of Moral Character and Moral Ideology.” In: M. L. Hoffman & L. W. Hoffman (Eds.), Review of Child Development Research (pp. 383-432). New York: Russell Sage Foundation.
  16. Korsgaard C. M. 2012. “A Kantian Case for Animal Rights.” In: M. Michel, D. Kühne, & J. Hänni (Eds.), Animal Laws – Tier und Recht: Developments and Perspectives in the 21st Century (pp. 3-27). Zürich & St. Gallen: DIKE.
  17. Leben D. 2019. Ethics for Robots: How to Design a Moral Algorithm. London & New York: Routledge.
  18. Lin P. 2014. “Introduction to Robot Ethics.” In: P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics (pp. 3-16). Cambridge: MIT Press.
  19. Lind G. & Wakenhut R. H. 1985. “Testing for Moral Judgment Competence.” In: G. Lind, H. A. Hartmann, & R. Wakenhut (Eds.), Moral Judgments and Social Education (pp. 79-105). New Brunswick & London: Transaction Publishers.
  20. Lind G. 1985. Inhalt und Struktur des moralischen Urteilens. Theoretische, methodologische und empirische Untersuchungen zur Urteils- und Demokratiekompetenz bei Studierenden. Diss. Konstanz: Universitätsdruck.
  21. Lind G. 2016. How To Teach Morality: Promoting Deliberation and Discussion, Reducing Violence and Deceit. Berlin: Logos Verlag.
  22. Malle B. F. 2014. “Moral Competence in Robots?” In: J. Seibt, R. Hakli, & M. Nørskov (Eds.), Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy (pp. 189-198). Amsterdam: IOS Press. DOI: https://doi.org/10.3233/978-1-61499-480-0-189.
  23. Malle B. F. 2016. “Integrating Robot Ethics and Machine Morality: The Study and Design of Moral Competence in Robots.” Ethics and Information Technology 18:243-256. DOI: https://doi.org/10.1007/s10676-015-9367-8.
  24. McLaren B. M. 2018. “Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions.” In: M. Anderson & S. L. Anderson (Eds.), Machine Ethics (pp. 297-315). Cambridge: Cambridge University Press.
  25. McCullough M. E., Kurzban R., & Tabak B. A. 2013. “Putting Revenge and Forgiveness in an Evolutionary Context.” Behavioral and Brain Sciences 36:41-58. DOI: https://doi.org/10.1017/S0140525X12001513.
  26. Millar J. 2017. “Ethics Settings for Autonomous Vehicles.” In: P. Lin, R. Jenkins, & K. Abney (Eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (pp. 20-34). Oxford: Oxford University Press.
  27. Mischel W. & Shoda Y. 1995. “A Cognitive-Affective System Theory of Personality: Reconceptualizing Situations, Dispositions, Dynamics, and Invariance in Personality Structure.” Psychological Review 102(2):246-268. DOI: https://doi.org/10.1037/0033-295X.102.2.246.
  28. Misselhorn C. 2018. Grundfragen der Maschinenethik. Stuttgart: Reclam.
  29. Moor J. H. 2006. “The Nature, Importance, and Difficulty of Machine Ethics.” IEEE Intelligent Systems 2:18-21. DOI: https://doi.org/10.1109/MIS.2006.80.
  30. Nowak E. 2016. “What Is Moral Competence and Why Promote It?” Ethics in Progress 7(1):322-333. DOI: https://doi.org/10.14746/eip.2016.1.18.
  31. Nowak E. 2017. “Can Human and Artificial Agents Share an Autonomy, Categorical Imperative-Based Ethics and ‘Moral’ Selfhood?” Filozofia Publiczna i Edukacja Demokratyczna 6(2):169-208. DOI: https://doi.org/10.14746/fped.2017.6.2.20.
  32. Prehn K., Wartenburger I., Meriau K., Scheibe Ch., Goodenough O. R., Villringer A., van der Meer E., & Heekeren H. R. 2008. “Individual Differences in Moral Judgment Competence Influence Neural Correlates of Socio-normative Judgments.” Social Cognitive and Affective Neuroscience 3:33-46.
  33. Rest J. R., Narváez D., Bebeau M. J., & Thoma S. J. 1999. Postconventional Moral Thinking: A Neo-Kohlbergian Approach. Mahwah, NJ: Erlbaum.
  34. Scheutz M., Briggs G., Cantrell R., Krause E., Williams T., & Veale R. 2013. “Novel Mechanisms for Natural Human-Robot Interactions in the DIARC Architecture.” Intelligent Robotic Systems: Papers from the AAAI 2013 Workshop: 66-72.
  35. Scheutz M. & Malle B. F. 2014. “‘Think and Do the Right Thing’: A Plea for Morally Competent Autonomous Robots.” Presented at the 2014 IEEE Ethics Conference, Chicago, IL. DOI: https://doi.org/10.1109/ETHICS.2014.6893457.
  36. Scheutz M., Malle B. F., & Briggs G. 2015. “Towards Morally Sensitive Action Selection for Autonomous Social Robots.” Presented at the 2015 IEEE Ethics Conference, Kobe, Japan. DOI: https://doi.org/10.1109/ROMAN.2015.7333661.
  37. Scheutz M. 2016. “The Need for Moral Competency in Autonomous Agent Architectures.” In: V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence (pp. 517-527). Heidelberg: Springer.
  38. Scheutz M. 2017 (Winter). “The Case for Explicit Ethical Agents.” AI Magazine 38(4):57-64. DOI: https://doi.org/10.1609/aimag.v38i4.2746.
  39. Scheutz M., Baral C., & Lumpkin B. 2017. “A High Level Language for Human Robot Interaction.” Advances in Cognitive Systems 5:1-16.
  40. Schmiljun A. 2017. “Robot Morality: Bertram F. Malle’s Concept of Moral Competence.” Ethics in Progress 8(2):69-79. DOI: https://doi.org/10.14746/eip.2017.2.6.
  41. Schmiljun A. 2018. “Why Can’t We Regard Robots As People?” Ethics in Progress 9(1):44-61. DOI: https://doi.org/10.14746/eip.2018.1.3.
  42. Steć M. 2017. “Is the Stimulation of Moral Competence with KMDD® Well-suited for Our Brain? A Perspective from Neuroethics.” Ethics in Progress 8(2):44-58. DOI: https://doi.org/10.14746/eip.2017.2.4.
  43. Sullins J. P. 2006. “When Is a Robot a Moral Agent?” International Review of Information Ethics 6(12):23-30.
  44. Ulgen O. 2017. “Kantian Ethics in the Age of Artificial Intelligence and Robotics.” QIL 43:59-83.
  45. Veruggio G., Solis J., & Van der Loos M. 2011. “Roboethics: Ethics Applied to Robotics.” IEEE Robotics & Automation Magazine 18(1):22-23.
  46. Wallach W. 2010. “Robot Minds and Human Ethics: The Need for a Comprehensive Model of Human Moral Decision Making.” Ethics and Information Technology 12(3):243-250. DOI: https://doi.org/10.1007/s10676-010-9232-8.