Civil liability for artificial intelligence products versus the sustainable development of CEECs: which institutions matter?

Małgorzata Godlewska
Sylwia Morawska
Przemysław Banasik

Abstract

The aim of this paper is to conduct a meta-analysis of the civil liability institutions of the EU and the Central and Eastern European Countries (CEECs) in order to determine whether they are ready for the Artificial Intelligence (AI) race. Particular focus is placed on ascertaining whether civil liability institutions such as the Product Liability Directive (EU) or the civil codes (CEECs) will protect consumers and entrepreneurs, as well as ensure undistorted competition. In line with the above, the authors investigate whether the civil liability institutions of the EU and the CEECs are based on regulations that can be adapted to the new generation of robots, which will be equipped with learning abilities and display a certain degree of unpredictability in their behaviour. The conclusions presented in the paper are drawn from a review of the current literature and of research on national and European regulations. The primary contribution of this article is to advance research on the concepts of AI liability for damage and personal injury. A second contribution is to show that the current civil liability institutions of the EU and the CEECs are not sufficiently prepared to address the legal issues that will arise when self-driving vehicles or autonomous drones begin operating in fully autonomous modes and possibly cause property damage or personal injury.

Article Details

Section
ARTYKUŁY - Ekonomia

Bibliography

    Accenture (2016). Why Artificial Intelligence is the future of growth. [accessed 4 October 2019].
    Amiot, M. (2016). Robonomics – How automation will change work. [accessed 4 October 2019].
    Arntz, M., Gregory, T., Zierahn, U. (2016). The risk of automation for jobs in OECD countries: a comparative analysis. OECD Social, Employment and Migration Working Papers No. 189. Paris: OECD Publishing.
    Borys, T. (2011). Sustainable development – how to recognize integrated order. Problems of Sustainable Development 6(2): 75–81.
    Čerka, P., Grigienė, J., Sirbikytė, G. (2015). Liability for damages caused by artificial intelligence. Computer Law & Security Review 31(3): 376–389.
    Duggal, P. (2017). Artificial Intelligence Law. Kindle Edition.
    Lea, G. (2015). Who’s to blame when artificial intelligence systems go wrong? [accessed 4 October 2019].
    Lichtenstein, J. (2017). Have You Been Injured by An AI Robot? – European Commission Recommends AI Robots Have Legal Status So They Can Be Sued. [accessed 4 October 2019].
    Maldonado, J. (2018). Legal Ethics: The Ethical Dilemma of Artificial Intelligence. [accessed 4 November 2018].
    McKinsey Global Institute (2013). Disruptive technologies: Advances that will transform life, business, and the global economy. [accessed 4 October 2019].
    Pagallo, U. (2018). Vital, Sophia, and Co. – the quest for the legal personhood of robots. Information 9(9): 230–241.
    Polinsky, A.M., Rubinfeld, D.L. (1988). The welfare implications of costly litigation in the theory of liability. Journal of Legal Studies 17: 151–164.
    Schwab, K., Davis, N. (2018). Shaping the Fourth Industrial Revolution. World Economic Forum. Kindle Edition.
    The Economist (2018). How Europe can improve the development of AI. [accessed 4 October 2019].
    Vladeck, D.C. (2014). Machines without principals: liability rules and artificial intelligence. Washington Law Review 89(1): 117–150.
    Wong, A. (2017). Who is liable when robots and AI get it wrong? [accessed 4 October 2019].