Liability for harm resulting from the operation of artificial intelligence
Ruch Prawniczy, Ekonomiczny i Socjologiczny, vol. 88, no. 1, 2026
PDF (Polish)

Keywords

artificial intelligence
high-risk AI systems
risk management
liability for harm
product defect

How to Cite

Kubera, P. (2026). Liability for harm resulting from the operation of artificial intelligence. Ruch Prawniczy, Ekonomiczny i Socjologiczny, 88(1), 87–106. https://doi.org/10.14746/rpeis.2026.88.1.05


Abstract

Artificial intelligence (AI) increasingly accompanies us in our professional and private lives. Alongside its undeniable benefits, it can also be a source of harm. The purpose of this article is to examine legal liability for damage resulting from the operation of AI systems. The problem is significant because research indicates that liability for damage caused by AI constitutes one of the key external obstacles to the adoption of artificial intelligence. The article identifies the major challenges in determining liability: autonomy, continuous adaptation, limited predictability, and lack of transparency. Using primarily the dogmatic-legal method, it outlines the general EU approach to regulating AI systems, which is based on risk analysis and risk management across the entire product life cycle, and presents the EU legal framework for liability for AI-related harm. This framework provides a two-track procedure for pursuing claims: under the liability of economic operators for defective products, and under national tort liability systems based on the principle of fault. The article contributes to the discussion on the optimal liability regime for harm caused by AI systems and evaluates the impact of these rules from the perspective of innovative enterprises and users. The study identifies key problem areas, including the partial discretion in classifying AI systems as high-risk, the regulatory focus on product safety risks with insufficient attention to other fundamental rights, and the lack of harmonization of national tort regimes, which results in divergent evidentiary standards across national courts.

