
UNVEILING BIAS IN AI AND NAVIGATING THE LEGAL LANDSCAPE

INTRODUCTION:

Artificial Intelligence (AI) is more common now than ever, owing to the development of contemporary technology in the era of globalization. AI promises to play a major role in the future landscape, but beneath its smooth exterior, bias lurks as a spectre of inefficiency.

This blog aims to serve as a guide to this underexplored area and to analyse the complex interplay between AI bias and the law. We will take a closer look at algorithms to show how they can perpetuate discrimination in a variety of ways. The blog also examines the moral conundrums that these concerns raise, compelling us to reconsider the basic tenets of justice in the era of automation.

The main aim of the blog is to examine the efforts made to correct this imbalance, and to propose effective solutions for dealing with this modern algorithmic issue so that AI becomes just, transparent and accountable.

THE PRESENCE OF AI AND ITS DANGERS:

In the modern age, AI has an all-pervasive effect: it shapes all forms of decision-making, especially in data-driven companies. This development has produced a number of instances that undermine basic tenets of the legal framework such as justice, equality and non-discrimination. Several of these are highlighted below along with their causes:

  1. Amazon built an experimental hiring tool that used AI to rate job candidates. The ratings were derived from patterns in resumes submitted to the company over a 10-year period, most of which came from men. As a result, the system taught itself that male candidates were preferable to female candidates and penalized women’s applications without considering their merits. The company ultimately disbanded the system altogether because of this gender bias.[1]
  2. Uber and Lyft use dynamic pricing systems in which the algorithm charges a higher price per mile when the pick-up point or destination is a neighbourhood with a higher proportion of ethnic minority residents than when it is a predominantly white one. The operators could not provide any reasonable justification for this, and the disparity was interpreted as emanating from the biased data set fed into the algorithm.[2]
  3. Google tracks user behaviour in order to show personalized ads to its users. Using a research tool called AdFisher, researchers ascertained that when Google presumed a user to be a male job seeker, he was shown more recommendations for higher-paying executive jobs. This bias indicates either that advertisers are requesting that ads for high-paying jobs be displayed only to men, or that some form of bias has crept into the algorithm.[3]

All these instances demonstrate that AI bias may lead to inaccurate decisions and distorted outputs, which in turn can have serious real-world consequences, with detrimental effects for everyone within their reach. The common thread running through these instances is data. AI learns bias from the data it is trained on, which means researchers need to be extremely careful when gathering and processing data. Data collection needs to be ethical, and special care must be taken to prevent algorithms from being trained on unrepresentative or incomplete data sets, as they would then rely on flawed information that reflects historical inequalities.[4]
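How skewed training data produces a skewed model can be made concrete with a small sketch. The records below are entirely hypothetical, loosely modelled on the Amazon example: most past applicants in the archive are men, and past reviewers hired them at a much higher rate, so any model trained merely to reproduce historical outcomes inherits that disparity.

```python
def selection_rate(records, group):
    """Fraction of applicants from `group` who were hired in the data."""
    members = [r for r in records if r["gender"] == group]
    if not members:
        return 0.0
    return sum(r["hired"] for r in members) / len(members)

# Hypothetical 10-year resume archive: 80% of applicants are male,
# and historical reviewers hired men at three times the rate of women.
history = (
    [{"gender": "male", "hired": 1}] * 48
    + [{"gender": "male", "hired": 0}] * 32
    + [{"gender": "female", "hired": 1}] * 4
    + [{"gender": "female", "hired": 0}] * 16
)

male_rate = selection_rate(history, "male")      # 48/80 = 0.6
female_rate = selection_rate(history, "female")  # 4/20  = 0.2

# A model trained to predict `hired` from this archive learns the 3:1
# disparity as if it were merit, regardless of individual qualifications.
print(male_rate, female_rate)
```

The numbers are invented, but the mechanism is the one the sources describe: the bias enters through the data, before any algorithmic choice is made.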

LEGAL FRAMEWORK

There is as yet no comprehensive legal framework to address the issues emanating from AI bias. Several countries are moving to formulate solid legislation in this respect, and international organisations and committees have also stepped in to provide guidelines and recommendations on ways to tackle these issues. Let us explore some of these framework initiatives:

  1. The EU AI Act[5] seeks to improve the environment for AI research and application. Depending on the risk posed by an AI system, the regulation imposes obligations on all parties involved. For instance, it outlaws the classification of individuals based solely on their socio-economic status, behaviour, or other personal traits, as such practices pose a threat to public safety; the Act, however, carves out certain exemptions exclusively for law enforcement.
  2. The Algorithmic Accountability Act of 2023[6] aims to create transparency and empower consumers to make informed decisions when dealing with automated systems. Such an act is indispensable in the absence of any other safeguard.
  3. New York City’s AI Bias Law[7] regulates the use of AI in a company’s HR practices, specifically targeting employers’ use of automated employment decision tools (AEDTs), which is prohibited subject to certain exceptions. The law also mandates annual independent bias audits to ensure compliance.
  4. The AI Bill of Rights[8] proposes ‘proactive and continuous measures’, such as equity assessments, independent evaluations and documentation of reports by those deploying AI systems, in order to ensure their fairness.
  5. The OECD’s recommendations on AI[9] lay down principles for the responsible stewardship of trustworthy AI, focusing on inclusive growth along with mechanisms to ensure transparency and accountability.
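To give a flavour of what an independent bias audit of the kind New York City mandates might compute, here is a minimal sketch of a disparate-impact check. It loosely follows the "impact ratio" notion used in AEDT audits and the EEOC's four-fifths rule of thumb; the 0.8 threshold, the group names and the rates are illustrative assumptions, not a statement of what any law requires.

```python
def impact_ratios(selection_rates):
    """Ratio of each group's selection rate to the most-selected group's rate."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

def flag_adverse_impact(selection_rates, threshold=0.8):
    """Groups whose impact ratio falls below the (four-fifths) threshold."""
    return [g for g, r in impact_ratios(selection_rates).items() if r < threshold]

# Hypothetical AEDT outcomes: the share of each group the tool advanced.
rates = {"group_a": 0.50, "group_b": 0.30}

print(impact_ratios(rates))        # group_b's rate is 0.6 of group_a's
print(flag_adverse_impact(rates))  # group_b falls below the 0.8 threshold
```

An auditor would of course also test statistical significance, intersectional categories and data quality; the point here is only that the core compliance metric is simple and mechanically checkable.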

PROBABLE WAYS TO DEAL WITH AI BIAS:

  1. Anti-discrimination laws – AI systems that display bias may be subject to existing anti-discrimination and civil rights laws that forbid discrimination based on gender, race or religion. For instance, a company utilizing an AI system for hiring may face legal action for breaking anti-discrimination laws if the system is discovered to be discriminating against women.[10]
  2. Fairness guidelines and regulations – Governments need to ensure that proper hard-law liability rules and guidelines are formulated for the fairness of AI systems. AI developers may be required to take measures to reduce AI bias, such as auditing AI systems for bias and using a variety of datasets.[11]
  3. Transparency and accountability – AI systems should be transparent and explainable, so that it is feasible to comprehend how they make decisions. This would facilitate the process of locating and resolving bias in AI systems.[12]
  4. Cause of action – If biased AI systems cause harm, affected people may be able to sue AI developers or users. For instance, a person refused a loan as a result of a biased AI system may sue the bank that employed it.[13]
  5. Legal status of AI – The legal personality or status of AI needs to be determined in order to apply vicarious liability to the company, which may otherwise be absolved of liability.[14]

CONCLUSION

Artificial Intelligence may be the new boon for easing tasks across all spheres, but hidden biases lurk behind it that can produce unforeseen or distorted outputs. We are at a pivotal point in the complicated maze of AI and law. One route is uncontrolled, where unbridled AI magnifies societal biases, sustaining discrimination and undermining confidence. The alternative route is guided by legislative frameworks designed to counteract AI bias; it necessitates alertness and calls for accountability, transparency and explainability from AI developers and users. Adopting this course calls for more than just laws and rules: governments, researchers, developers and users must work together to create a complex web of ethical guidelines and oversight procedures. We need to move forward with unbiased algorithms, diverse datasets and thorough audits as our weapons to drive bias out of AI’s silicon cells.

Author(s) Name: Yashi Agarwal (National Law University, Delhi)

Reference(s):

[1] Jeffrey Dastin, ‘Insight – Amazon scraps secret AI recruiting tool that showed bias against women’ (Reuters, 11 October 2018) <https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/> accessed 16 January 2024.

[2] Donna Lu, ‘Uber and Lyft pricing algorithms charge more in non-white areas’ (New Scientist, 2020) <https://www.newscientist.com/article/2246202-uber-and-lyft-pricing-algorithms-charge-more-in-non-white-areas/> accessed 16 January 2024.

[3] Julia Carpenter, ‘Google’s algorithm shows prestigious job ads to men, but not to women. Here’s why that should worry you’ (Washington Post, 2015) <https://www.washingtonpost.com/news/the-intersect/wp/2015/07/06/googles-algorithm-shows-prestigious-job-ads-to-men-but-not-to-women-heres-why-that-should-worry-you/> accessed 16 January 2024.

[4] ‘Addressing Bias in Artificial Intelligence’ (Thomson Reuters, 2023) <https://www.thomsonreuters.com/en-us/posts/wp-content/uploads/sites/20/2023/08/Addressing-Bias-in-AI-Report.pdf> accessed 16 January 2024.

[5] EU AI Act [2023].

[6] Algorithmic Accountability Act [2023].

[7] ‘Understanding NYC’s AI Bias Law: Impact on Employment Decisions’ (Barclay Damon, 2023) <https://www.barclaydamon.com/alerts/understanding-nycs-ai-bias-law-impact-on-employment-decisions> accessed 16 January 2024.

[8] AI Bill of Rights [2023].

[9] ‘Recommendations of the Council on Artificial Intelligence’ (OECD, 2023) <https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449> accessed 16 January 2024.

[10] ‘Foundation models: Opportunities, risks and mitigations’ (IBM, 2023) <https://www.ibm.com/downloads/cas/E5KE5KRZ> accessed 16 January 2024.

[11] ‘Minimising AI bias: Best practices for organisations’ (British Council, 2023) <https://corporate.britishcouncil.org/insights/minimising-ai-bias-best-practices-organisations> accessed 16 January 2024.

[12]Nicol Turner Lee, Paul Resnick, and Genie Barton, ‘Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms’ (Brookings, 2019) <https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/> accessed 16 January 2024.

[13] James Manyika, Jake Silberg, and Brittany Presten, ‘What do we do about the biases in AI’ (Harvard Business Review, 2019) <https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai> accessed 16 January 2024.

[14] Kirsten Martin, ‘Algorithmic Bias and Corporate Responsibility: How Companies Hide Behind the False Veil of the Technological Imperative’ (2021) <https://books.google.co.in/books?hl=en&lr=&id=E51kEAAAQBAJ&oi=fnd&pg=PA36&dq=data+driven+companies+and+their+AI+bias&ots=INixbYDDJ3&sig=xmrvYUCLT8OBwdJ8tafIqu285r0&redir_esc=y#v=onepage&q&f=true> accessed 16 January 2024.