
IMPACT OF AI ON ELECTRONIC EVIDENCE


INTRODUCTION

In this blog, we will discuss the impact of AI-generated evidence on the justice delivery system. Let’s begin with an example: suppose Mr. X, a renowned individual, is accused of a criminal offence. If the complainant generates fake electronic evidence with the help of AI, what will be the consequences? How will the courts of law distinguish authentic, original evidence from a deepfake? Let’s delve into the contemporary situation concerning this subject matter.

THE INDIAN EVIDENCE ACT’S EVIDENTIARY PROVISIONS

According to Section 3 of the Indian Evidence Act, 1872, “evidence” means and includes:

(1) all statements which the Court permits or requires to be made before it by witnesses, in relation to matters of fact under inquiry; such statements are called oral evidence;

(2) all documents including electronic records produced for the inspection of the Court; such documents are called documentary evidence.[1]

In simple language, evidence is information that supports the facts and claims of a case. It can take the form of oral testimony, printed documents, or data stored on electronic devices, and it is used to verify the truthfulness of the parties’ arguments. Electronic records can be produced as both primary and secondary evidence: the original record itself constitutes primary evidence and serves as strong proof, while a copy of the record, accompanied by a certificate of authentication under Section 65B of the Indian Evidence Act, 1872, is admissible as secondary evidence.[2] Storage media such as a CD, DVD, chip, hard drive, memory chip, or pen drive are admissible as primary evidence.[3]

AI-GENERATED CONTENT: A THREAT TO ELECTRONIC EVIDENCE?

We have relied on digital images and videos as evidence ever since such data first became storable. Digital evidence holds a concrete position in proving or disproving facts.[4] The contention now arises because AI can generate images and footage of scenes that never existed in real life. What will happen if we can no longer rely on what we see in CCTV footage? For example, in a recent criminal case concerning the murder of an aspiring air hostess, the accused was identified committing the crime on CCTV footage. This shows how heavily crime-scene investigation depends on electronic records.

AI tools like Midjourney and Ideogram can create seemingly real images from a given prompt. The day is not far when crooks will use AI to generate fake CCTV footage and thereby fabricate evidence.[5] Voice conversion tools[6] have advanced to the point where they can clone anyone’s voice, enabling us to hear our preferred singers perform songs with customised lyrics of our choice. Imagine listening to A.R. Rahman singing K-pop. Generative artificial intelligence tools are creating imaginary photos, and now even videos. However, it is crucial to consider the ethical and legal ramifications of this innovation. Voice cloning must be approached responsibly and ethically, refraining from using it to deceive or manipulate others; misusing the technology to create videos or recordings that contradict the values or intentions of public figures can have serious consequences. This is an alarming issue that demands prompt consideration. Two important questions need to be addressed concerning the relevance of electronic evidence in the AI-generative era:

  • How will the courts of law test the credibility and reliability of presented evidence, i.e., distinguish original evidence from AI-generated evidence?
  • What criteria should determine the admissibility of AI-generated evidence?

POSSIBLE THREATS

  • AI-generated evidence: evidence that appears relevant may be demonstrably unreliable to such an extent that allowing it to be considered in a case would create the risk of an unjust verdict.
  • Defamation: The creation of defamatory voice or video recordings that convincingly mimic someone saying words that person never actually spoke can lead to criminal charges.[7] Such recordings can nevertheless be presented as evidence in courts of law.
  • Deepfake cases: Deepfakes take their name from the fact that they use deep learning technology to create fake videos. Deep learning is a kind of machine learning that applies neural-network simulation to massive data sets; the AI effectively learns what a particular face looks like from different angles and transposes that face onto a target as if it were a mask.[8]

Deepfakes essentially tamper with original recordings in order to mislead, which can have serious legal consequences. For instance, a cloned video recording may show a public personality saying something contrary to his or her social values.

  • Telecom scams: Telephone scammers are using voice-imitating AI tools to trick people into sending money. According to a survey conducted by McAfee, 1 in 4 adults has encountered such a scam, and more than 53% of adults share their voice on social media.[9]
  • Privacy: Creating false content infringes the right to privacy guaranteed under Article 21 of the Constitution of India. For example, cloning someone’s voice without consent violates his or her privacy.
  • Lack of verification mechanism: We have not yet come up with a comprehensive falsehood detector. Such a detector would use advanced machine learning techniques and natural language processing to analyse various forms of information, such as written text, audio, and video, and identify false or misleading content. Until one exists, simpler integrity checks can at least establish whether a record was altered after it was collected, as the sketch following this list illustrates.
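
One safeguard already available in digital-forensic practice is cryptographic hashing: if a digest of a recording is computed and noted at the moment of collection, any later alteration of the file changes the digest and is therefore detectable. Below is a minimal Python sketch of such an integrity check; the file name and the recorded digest are illustrative placeholders, not real case data, and a matching digest only proves the file is unchanged since hashing, not that its content was genuine to begin with.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digest noted (e.g. in a seizure memo) when the evidence was collected.
# Illustrative placeholder value, not real case data.
RECORDED_DIGEST = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

if __name__ == "__main__":
    current = sha256_of_file("cctv_footage.mp4")  # hypothetical exhibit file
    if current == RECORDED_DIGEST:
        print("Digest matches: file unchanged since collection.")
    else:
        print("Digest mismatch: file was altered after collection.")
```

A check like this cannot expose a deepfake created before the evidence was collected, which is why the comprehensive detector described above remains necessary.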

CONCLUSION

In this AI-generative era, legal systems must adapt and evolve to address these challenges and establish clear guidelines for handling AI-generated evidence while preserving the integrity of the justice system. Legal advancement takes time, but technology, driven by curiosity, waits for no one. A standard procedure needs to be developed to test the explainability and trustworthiness of electronic evidence. As Dr Maura Grossman, a University of Waterloo expert in ethics, law and technology, stated: “Let’s find something in between outright banning and doing nothing, and maybe it is some agency for algorithms that has to approve them.”[10] We need qualified digital-evidence professionals and forensic experts to advise lawyers and judges on the soundness of electronic evidence. Additionally, international cooperation and standardized approaches may be necessary to tackle these issues on a global scale.

Author(s) Name: Aashi Gupta (Devi Ahilya Vishwavidyalaya, Indore, Madhya Pradesh)

References:

[1] The Indian Evidence Act 1872, s 3

[2] The Indian Evidence Act 1872, s 65B

[3] Vijay Pal Dalmia, ‘India: Admissibility and Evidentiary Value of Electronic Records’ (Mondaq, 12 July 2019) <https://www.mondaq.com/india/court-procedure/824974/admissibility–evidentiary-value-of-electronic-records> accessed 30 September 2023

[4] Arundhati Roy, ‘Examiner of Electronic Evidence’ (iPleaders, 13 June 2021) <https://blog.ipleaders.in/examiner-electronic-evidence/#> accessed 8 October 2023

[5] Stuart A. Thompson and Tiffany Hsu, ‘How Easy It Is to Fool A.I.-Detection Tools’ (The New York Times, 28 June 2023) <https://www.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html> accessed 30 September 2023

[6] Kim Martin, ‘What Is Voice Cloning’ (ID R&D) <https://www.idrnd.ai/what-is-voice-cloning/> accessed 8 October 2023

[7] Rashmi Senthilkumar, ‘Defamation Law in India’ (Legal Services India) <https://www.legalserviceindia.com/legal/article-2224-defamation-law-in-india.html> accessed 30 September 2023

[8] Joseph Foley, ‘20 of the Best Deepfake Examples That Terrified and Amused the Internet’ (Creative Bloq, 1 March 2023) <https://www.creativebloq.com/features/deepfake-examples> accessed 11 September 2023

[9] McAfee Corp, ‘Artificial Intelligence Voice Scams on the Rise with 1 in 4 Adults Impacted’ (McAfee, 2 May 2023) <https://www.mcafee.com/ko-kr/consumer-corporate/newsroom/press-releases/press-release.html?news_id=5aa0d525-c1ae-4b1e-a1b7-dc499410c5a1&langid=48> accessed 11 September 2023

[10] Terry Pender, ‘A.I. Threatens Courts with Fake Evidence, UW Prof Says’ (JD Supra, 29 March 2023) <https://www.jdsupra.com/legalnews/ai-threatens-courts-with-fake-evidence-7371356/> accessed 8 October 2023
