Abstract
Should ChatGPT be viewed merely as a supportive tool for writers, or does it qualify as a co-author? As ChatGPT and similar language models become more prevalent in academic writing and research, we appear to face two possibilities: an increase in ghostwriting that could eventually undermine the integrity of the knowledge system, or the need for theoretical groundwork to recognize the role of non-human contributors. Drawing on Actor-Network Theory, this article examines whether this chatbot meets, in principle, the requirements for co-authorship. Answering this question in the affirmative, it then turns to philosophical discussions concerning the agency, moral agency, and moral accountability of such technological entities.
License
Copyright (c) 2024 Rahman Sharifzadeh
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.