INTRODUCTION
The global artificial intelligence market is estimated to be worth over $3,680.47 billion by 2034, expanding at a CAGR of 19.1% from 2024 to 2034. This figure highlights the significant growth and widespread adoption of Artificial Intelligence (AI). The expansion of AI, however, has been a steady progression rather than a sudden surge. The concept of artificial beings dates back to ancient myths and stories, and philosophers such as Aristotle pondered the idea of automating human reasoning. Since the early days of AI history, computer scientists have strived to make machines as intelligent as humans. The field of AI research was formally founded in 1956 at the Dartmouth Conference, where the term itself was coined. Research and development proceeded in full swing during the 1960s and 1970s, slowed briefly during the mid-1970s, and gained momentum again during the 1980s.

Since then, numerous advances have made AI more accessible to ordinary individuals while also exposing them to more nuanced cyber-attacks. This increased usage, and the increased susceptibility of people to cyber threats, calls for exploring the complexities of assigning legal liability for offences committed by AI systems, as well as the regulatory frameworks and laws governing them. This forms the premise of this blog, which analyses the existing legal frameworks and delves into theories of liability, the role of developers, and user accountability.
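As a rough back-of-the-envelope consistency check (an illustrative calculation of my own, not a figure taken from the cited market report), a projection of $3,680.47 billion in 2034, growing at a 19.1% CAGR over the ten years from 2024, implies a 2024 base of roughly

\[
\frac{\$3680.47\ \text{billion}}{(1.191)^{10}} \approx \$641\ \text{billion},
\]

i.e., the projection assumes the market grows nearly sixfold over the decade.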
THEORIES OF LIABILITY
AI, despite its immense utility, is prone to errors. For instance, a driverless car might cause an avoidable accident, an AI algorithm assessing mortgage applications could introduce biases based on race or caste, or an AI-assisted surgical tool might make a harmful decision during an operation. So, while AI generally performs its tasks accurately, failures are inevitable.
As the above examples show, these failures pose physical safety risks, fundamental rights risks (for instance, discrimination based on caste or race, as in the mortgage example), cybersecurity risks, and even pure economic risks (for example, where AI influences consumers into buying overpriced products, causing financial loss that is unconnected to any safety concern).
When these failures occur and such risks materialise, questions about responsibility and remedies arise, particularly regarding who is at fault when an AI system causes harm.
To address these questions, we will first look at a few of the existing liability regimes:
- Fault liability: Commonly used in tort law, this regime requires a valid reason to shift the loss from the victim to the tortfeasor, typically the tortfeasor’s fault, which can range from intent to various degrees of negligence. Jurisdictions use different methods to keep liability within reasonable limits, frequently requiring that the tortfeasor’s conduct be objectionable, violate the law or public policy, or infringe vital human rights and legally protected interests [4]. Consider Oriental Insurance Co. Ltd. v. Hansrajbhai V. Kodala & Ors (2001), in which the Supreme Court held that compensation under Section 163-A of the Motor Vehicles Act, 1988 is in lieu of, and not in addition to, compensation determined on the basis of fault liability; the provision thus operates as a no-fault alternative to the fault-based regime.
- Non-compliance liability: This liability arises from a breach of specific laws or standards set up to prevent the particular type of harm at hand. In the context of AI, it can arise when a system does not comply with the regulatory rules or procedures laid down by the government [4]. An example is the Consumer Protection Act, 2019, which provides penalties for non-compliance with consumer rights.
- Product liability: Here, fault on the producer’s part is not required; even a shortcoming or defect in the product is enough to make the producer liable for any harm or damage under this regime [4]. Statutes such as the Bureau of Indian Standards Act, 2016 lay down standards for goods and services that are useful in determining product liability.
- Strict liability: Here, liability is invoked purely on the basis of causation rather than any deficiency in the product or service. It is imposed in situations where the victim would suffer substantial harm even in the absence of any fault, identifiable defect, or other non-compliance [4].
The primary issue with the existing liability schemes is that they are insufficient to address the challenges posed by AI: they concentrate predominantly on safety risks while neglecting fundamental rights risks. With the widespread adoption of AI, however, new and more comprehensive regulations are being proposed and implemented to address these emerging issues.
EXISTING LAWS/REGULATORY FRAMEWORKS REGARDING AI
Around the world, various countries have rolled out regulatory frameworks for AI developers to follow, and the European Union has passed the first comprehensive legislation aimed at regulating AI, known as the EU AI Act. The Act classifies AI systems according to the risk they pose. Systems placed in the unacceptable-risk category are banned; examples include social scoring (classifying people based on personal attributes) and the cognitive-behavioural manipulation of people or specific vulnerable groups (for instance, voice-activated toys aimed at children). Chatbots are placed in the limited-risk category, with separate transparency requirements that they must comply with.

Japan, on the other hand, amended its Copyright Act in 2018 to add an exception applicable to AI training, making it one of the most permissive copyright regimes for AI development. It does, however, apply hard-law regulation to high-risk AI and soft-law regulation to low-risk AI.

India’s National Strategy for Artificial Intelligence, released in 2018, outlined a plan to integrate AI into five public sectors while ensuring that its implementation is safe and beneficial for all citizens. In June 2023, Union Minister Rajeev Chandrasekhar emphasised that the Centre’s approach to AI regulation will focus on user harm or potential harm derived from any technology. Instead of regulating specific use cases, the strategy will involve creating regulatory guardrails to ensure the safe use of AI across all platforms. These guardrails are crucial for building trust and collaboration among AI platforms, financial institutions, and customers, ensuring that AI development and deployment are fair, just, accurate, and appropriate. There is, as yet, no risk-based categorisation of AI systems in India.
WHO SHOULD BE HELD LIABLE: THE DEVELOPERS, THE USERS OR THE GOVERNMENT?
The discussion above covered the existing liability regimes and the laws and regulations relating to AI, but not who should actually be held liable for the harm caused by AI. The existing liability regimes and regulations tend to place the burden of liability mostly on developers. Today, however, access to AI is very wide, so placing the entire burden on developers, without imposing any liability or duty of care on end users, would not effectively mitigate the potential misuse of AI. While developers must take care in creating and distributing their products, it is also the duty of users to use those products carefully, without negligence, and to ensure that their use of an AI system does not harm others. For example, suppose an AI tool is developed to help people create hyper-realistic images for comics, advertisements, and similar purposes economically, but some users begin using it to create fake images of other people without their consent, causing them harm. Placing liability solely on the developers and none on the users would not help in such a case.

Where both the creator and the users are essential components of a working system, it becomes imperative to frame liability regimes, laws, and regulations that establish a shared responsibility among those components to ensure that the system works smoothly. This also means that where such adequate laws are absent, the government should be held liable for failing to take the measures required to safeguard the interests of the various parties involved.
CONCLUSION
With new advancements and the ever-easier accessibility of AI, it is now of utmost importance that laws regulating it are put in place. This, however, will not be possible unless it is unequivocally decided who is responsible for the harm caused by AI.
From the above discussion, it can be said that developers, users, and even the government need to be held responsible and liable for the harm caused by AI, depending on the case. Laws should therefore be formulated so that liability can be placed on all three, and regulations should be stringent enough to protect the interests of users without unnecessarily restricting the creativity of developers.
Author(s) Name: Medha Arora (Maharashtra National Law University, Nagpur)