IMPLEMENTATION OF AI AS JUDGES

INTRODUCTION

In recent decades, AI algorithms—also known as machine-learning algorithms—have evolved for use in a variety of applications, including judicial decision-making. Although many judges have not yet used AI, they are increasingly being given the chance to do so. There are benefits and drawbacks to using AI algorithms in the courtroom. The purpose of this article is to explain how artificial intelligence is, and may come to be, used in judicial decision-making, what risks this entails, and how judges might address or reduce these concerns. Today, judges, attorneys, and law firms frequently employ artificial intelligence in their research. The demand for faster redressal, however, raises the further question of whether AI should itself make decisions and pass orders.

It is unlikely that technology will replace judges, given how humans understand law and judgment. AI is, however, expected to assist judges in their decision-making to a growing extent, and understanding the difficulties that arise in this supporting role matters for its efficient use. AI is unlikely ever to replace judges entirely, because judging is not a solitary, automatable process: deep learning finds patterns in data, whereas adjudication rests on analogy-based reasoning.[1] As in administrative law, the areas of judicial decision-making where machine learning may prove most problematic are those where, at least initially, artificial intelligence is used to form judgments rather than merely to support them.

ALGORITHMIC BIAS

Even those with the best intentions have hidden prejudices that may manifest themselves when making decisions. Algorithms, by contrast, are sometimes presented as tools for unbiased and fair decision-making. Yet there are several instances in which decision-making algorithms have produced biased results, demonstrating that the idea of algorithmic neutrality is false.[2] Addressing algorithmic bias is essential because, as AI increasingly assists judges, it may lead to biased and incorrect decisions. This is not because algorithms are ineffective at supporting human decision-making, but because the notion that artificial intelligence will deliver fair and unbiased outcomes is a myth: algorithmic bias is both real and unavoidable, and it requires a different approach to management than human prejudice does. In AI, three types of bias can produce inaccurate and discriminatory results: bias in the algorithmic model-building process, bias in the training sample, and societal prejudices that the algorithm picks up and amplifies.

The first type is a biased process, that is, bias in the data analysis performed by an algorithm. Bias typically enters algorithmic processes because human attitudes are built into the algorithm. Even when no person directly makes the final choice, there is usually some degree of human involvement in arriving at the solution: humans define the problem and decide what the algorithm should predict before any data are analyzed.

The second kind is skewed sample data. The quality of the input data affects how well an algorithm predicts. If an algorithm is trained on a dataset that, for whatever reason, is not representative of the entire population, it will produce non-representative findings. For instance, records containing missing or erroneous data may present quality problems. The dataset as a whole may also fail to accurately reflect the population, or its quality problems may be more common for a protected group than for others.[3]
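
To make this concrete, the following is a minimal, purely illustrative sketch in Python. The groups, features, and thresholds are invented for the example and are not drawn from any real judicial dataset; it simply shows how a training sample that under-represents one group can yield skewed predictions for that group.

```python
# Illustrative only: synthetic data showing how an unrepresentative training
# sample skews a model's predictions for the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, threshold):
    """Synthetic cases: one feature; the outcome is 1 when it exceeds a group-specific threshold."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Group A and group B follow slightly different (hypothetical) patterns,
# but group B is badly under-represented in the training data.
x_a, y_a = make_group(950, threshold=0.0)
x_b, y_b = make_group(50, threshold=0.8)
model = LogisticRegression().fit(np.vstack([x_a, x_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh samples of equal size from each group.
x_a_test, y_a_test = make_group(1000, threshold=0.0)
x_b_test, y_b_test = make_group(1000, threshold=0.8)
print("accuracy for group A:", round(model.score(x_a_test, y_a_test), 2))
print("accuracy for group B:", round(model.score(x_b_test, y_b_test), 2))  # noticeably lower
```

Because the learned decision boundary is dominated by group A's pattern, cases from group B that fall between the two thresholds are systematically misclassified.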

The third category of algorithmic bias involves data that reflects societal preconceptions. A machine-learning algorithm's training set may include instances of prior systematic discrimination. An AI may therefore still exert a disproportionate influence or discriminate indirectly even when trained on representative data.[4] This type of bias differs from biased sample data in that, even when the data is representative of the broader public, it still produces a disproportionate effect as a result of embedded social imbalances.
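
The following sketch, again using invented data, illustrates this third category: even when both groups are equally represented, a model trained on historical outcomes that penalised one group will reproduce that penalty for otherwise identical cases.

```python
# Illustrative only: a representative sample still reproduces past discrimination
# when the historical outcomes used as training labels encode it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
merit = rng.normal(size=n)               # hypothetical legitimate factor
group = rng.integers(0, 2, size=n)       # 0 = group A, 1 = group B, equally represented

# Historical decisions (the training labels) penalised group B regardless of merit.
noise = rng.normal(scale=0.3, size=n)
historical_outcome = ((merit - 0.8 * group + noise) > 0).astype(int)

X = np.column_stack([merit, group])      # group membership is visible to the model
model = LogisticRegression().fit(X, historical_outcome)

# Two otherwise identical applicants, differing only in group membership.
same_merit = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_merit)[:, 1])  # group B receives a markedly lower score
```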

THE WAY FORWARD

Throughout the pandemic, the main points of contention have included escalating employability issues, liquidity concerns, labour issues, and rental disputes. These are the kinds of issues that ADR may be needed to resolve. Currently, ADR is used only to settle business-related disputes. It is noteworthy that after the 2018 amendment to the Commercial Courts Act, which obliged litigants to exhaust pre-litigation settlement options before instituting a commercial suit, the success rate of commercial disputes handled at the pre-litigation stage improved by 62%. AI may mediate disputes between the parties, which would ultimately lighten the load on the court system and speed up the administration of justice. Alternative dispute resolution has long been seen as a crucial component in delivering justice.[5]

AI would enable disputants to examine their challenges by entering information into an online system that classifies their issues, informs them of their rights and entitlements, and suggests settlement options. It is clear that some aspects of judicial work will be performed by technological means in the future, especially in areas where AI systems can be built. Advanced "branching" and data-searching technologies could already be used to create vast decision trees that offer resolutions to conflicts in legal advising and AI systems: given a description of the dispute, the computer can carry out the requested analysis. AI computer programs have also been tested in experiments to predict the outcomes of cases from textual data (predictive analysis).
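
As a rough illustration of the "branching" logic such an online system might use, the toy Python sketch below walks a small decision tree over answers supplied by a disputant. The dispute categories, questions, and suggested routes are invented for the example and are not taken from any existing platform.

```python
# Toy sketch of decision-tree ("branching") triage for an online dispute-resolution tool.
def triage(dispute: dict) -> str:
    """Walk a small decision tree over answers supplied by the disputant."""
    if dispute.get("type") == "rental":
        if dispute.get("amount_in_dispute", 0) < 50_000:
            return "Suggest direct negotiation, with a model settlement letter."
        return "Suggest mediation before any court filing."
    if dispute.get("type") == "commercial":
        if dispute.get("contract_has_arbitration_clause"):
            return "Point the parties to arbitration under their contract."
        return "Suggest pre-litigation mediation before instituting a commercial suit."
    return "Refer the matter to a lawyer for individual advice."

print(triage({"type": "rental", "amount_in_dispute": 20_000}))
print(triage({"type": "commercial", "contract_has_arbitration_clause": True}))
```

A real system would involve far larger trees and would still need human review, but the branching structure is the same.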

Aletras and colleagues developed a programme that textually examined judgments of the European Court of Human Rights concerning alleged infringements of human rights in order to find patterns within the opinions. The programme recognised these patterns and, on average, was 79% accurate in predicting the outcome of cases given to it in a structured format. Using machine learning, a computer system can "examine prior data to produce rules that are generalizable in the future".[6] Using AI as an adjudicating officer in ADR might therefore represent a significant overhaul of India's legal system.
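
In the same spirit as the experiment described above, the sketch below trains a simple text classifier to predict an outcome from the wording of a decision. The miniature "case texts" and labels are invented purely for illustration and bear no relation to the dataset Aletras and colleagues actually used.

```python
# Illustrative only: predicting an outcome from case text with a bag-of-words model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented snippets of past judgments: 1 = violation found, 0 = no violation.
past_cases = [
    ("applicant detained without judicial review for a prolonged period", 1),
    ("authorities failed to investigate credible allegations of ill-treatment", 1),
    ("applicant denied access to a lawyer during police questioning", 1),
    ("domestic courts gave a reasoned judgment after an adversarial hearing", 0),
    ("interference was prescribed by law and proportionate to a legitimate aim", 0),
    ("complaint was examined promptly and the remedies were effective", 0),
]
texts, outcomes = zip(*past_cases)

# Bag-of-words features plus a linear classifier: the "pattern-finding" step.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, outcomes)

new_case = ["applicant held for months with no review by any court"]
print(model.predict(new_case))        # predicted outcome (1 = violation)
print(model.predict_proba(new_case))  # associated probabilities
```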

CONCLUSION

It is well accepted that AI cannot replace judges in court, because it is humanly impossible to translate laws and legislation accurately into code, functions, and commands that a computer can understand perfectly, let alone to capture the nuanced, contextual comprehension of legal language needed to deliver a perfect output. Every court case involves unique facts and circumstances that must be understood in addition to the legal issues, and relying on such rigid technology would never be wise. The Supreme Court noted in I.C. Golak Nath and Ors. v State of Punjab and Anr[7] that the law is dynamic and changes in response to societal demands. In one way or another, our judicial system exercises discretion in a large number of judgments, whereas AI will inevitably produce predetermined results. The courts pass decisions and judgments that are only humanly conceivable, taking into consideration the values of society, the subjective characteristics of the parties, and the current social conditions. Technology cannot take the place of moral principles and rational thought. At the same time, we think that the use of AI in ADR as a medium for adjudication is, at present, an excellent way to assist with information gathering and research in the legal field. The wider questions of whether, when, and to what degree technology will change the judiciary, however, remain open.

Author(s) Name: Md. Tauseef Alam (Lloyd School of Law)

References:

[1] Cass R. Sunstein, “Of Artificial Intelligence and Legal Reasoning” (2001) 8 U. Chicago L. Sch. Roundtable 29 at 29, 31.

[2] See e.g. Julia Angwin & Jeff Larson, “Bias in Criminal Risk Scores is Mathematically Inevitable, Researchers Say”, ProPublica (30 December 2016).

[3] Ibid at 1402–1404.

[4] Solon Barocas & Andrew D. Selbst, “Big Data’s Disparate Impact” (2016) 104:3 Cal. L. Rev. 671 at 673–674.

[5] K. Srinivas Rao v. D. A. Deepa, (2013) 5 SCC 226.

[6] Harry Surden, “Machine Learning and Law” (2014) 89 Washington Law Review 87 at 105.

[7] I.C. Golak Nath and Ors. v State of Punjab and Anr, AIR 1967 SC 1643.