INTRODUCTION
Artificial intelligence (AI) has advanced rapidly from a futuristic concept to a vital part of contemporary life. It has transformed the way we live and work, shaping everything from small tasks to essential processes. As automated technology becomes ever more integrated into daily life, military divisions across the globe are deploying defensive, offensive, and surveillance AI on the battlefield. The anticipated and potentially revolutionary effects of such automated technologies on global politics, military power, and strategic competition have sparked a debate. Long before AI became popular in the commercial sector, defence organizations around the world were using it for tasks including target detection, drone swarms, data processing, and transportation. Having recognized the potential of this emerging technology, armed forces worldwide have begun investigating their own applications of AI. With the development of technologies such as electric combat vehicles, drones, ground robots, and sensors that support soldiers in combat, AI is finding ever more uses in military software and applications. These technologies are designed to handle massive volumes of data quickly and effectively, giving soldiers the real-time information they need to make prompt, well-informed decisions. Advances in AI have affected many aspects of everyday life and have also reshaped weapon design and unconventional warfare.[1] AI is being used in the military for many different purposes, from decision support for targeting to robotics control software, and the list of applications keeps growing.
If artificial intelligence (AI) were used to perform tasks that have historically been carried out by humans, decisions about whom to kill, injure, destroy, or damage in battle could change drastically. The main difficulty is the absence of human oversight over these decisions, which leads to unpredictable outcomes and raises a number of ethical and legal concerns. Applying AI to warfare would sharply reduce human involvement in military operations, which would open the door to war crimes and erode the safeguards against the escalation of conventional conflicts into nuclear ones. The consequences of this shift extend beyond national boundaries, affecting international relations and the dynamics of global security.
OPTICS AND ADVERSITIES OF AI IN THE MILITARY
AI holds considerable promise for improving training simulations such as war gaming. It can make simulations more realistic, flexible, and complex by enabling the creation of multidimensional and multipolar scenarios. One of AI's most significant uses is facilitating communication between humans and machines. Technologists worldwide have been developing sophisticated autonomous agents, such as robots, along with tools that allow people to communicate with the technologies they use. The MiG-35 fighter jet, which Russia introduced in 2019, carries an embedded artificial intelligence (AI) pilot assistant named Rita, which provides the pilot with recommendations during combat and in critical flight scenarios.[2] This technology could make twin-pilot aircraft unnecessary, significantly boosting efficiency.
AI integrated into cutting-edge military software and technology can improve decision-making, reduce labour costs, and protect soldiers. Hazardous activities can be delegated to non-human agents to keep people out of harm's way. Explosives, for instance, can injure troops in conflict, but tactical ground robots can detect and handle them. At the same time, robotic dogs can serve as an extra pair of hands, carrying or retrieving needed equipment. Because robots can accomplish these activities without endangering soldiers, using them can save countless lives.[3] Artificial intelligence also makes possible autonomous vehicles and drones that function without direct human involvement. These systems can carry out reconnaissance, surveillance, and even combat operations, lowering the risk to human soldiers and boosting operational effectiveness. AI can strengthen cyber-defence systems by identifying and addressing threats faster than conventional techniques. In contemporary warfare, it can also be employed offensively to attack an opponent's digital infrastructure and gain a tactical edge.
More precise data analysis can support more effective targeting of opponents. Automated technologies allow safer, more informed decisions in conflict with far fewer mistakes. Artificial intelligence (AI) and drones are used to recognize and report potential dangers and threats; they provide an edge when planning an attack because they can identify distant objects with greater accuracy. Thanks to its unparalleled integration, which enables it to gather information from various sources and assess the situation comprehensively before applying force, a modern AI would not act arbitrarily or base its decisions on unreliable sources, and it can do so faster than a human could. To ensure that these machines behave morally throughout missions, it is advised that in-depth research be conducted and that consensus be reached on a single ethical paradigm to be applied to them.[4] This can be difficult, though, given the range of theories of morality and ethics as well as the diversity of cultural perspectives.
Article 36 of Additional Protocol I stipulates that any newly developed weapon must be reviewed to ensure that it does not violate international law. In terms of accountability and attribution, intelligent machines pose serious challenges to international law, and it is debatable whether machines devoid of human judgment and reason should be permitted to operate under this framework.[5] Under the fundamental principle of proportionality, the loss of innocent lives and the injuries inflicted on civilians must not be disproportionate to the anticipated military advantage. To ascertain whether fully autonomous weapons could comply with these requirements, researchers, statisticians, and international legal experts must undertake a thorough examination. In addition, there are serious concerns that terrorist organizations, extremist groups, and rogue states will covertly employ this new technology to imperil people. No formal international instrument expressly regulates the development and proliferation of autonomous weapon systems.
Because relying on technology carries its own hazards and perils, some view AI as a disadvantage or setback. Security risks are a major worry: AI systems are susceptible to hacking or manipulation by hostile parties, and if they are compromised, private data may be stolen and used against the force that operates them.[6] The unpredictable nature of AI raises a further problem: when something goes wrong, it can be difficult to assign blame. This worry becomes more pressing as AI systems develop and perform activities on their own. Innocent bystanders might be injured by AI malfunctions, and it is not clear who would be held responsible. Giving computers a consciousness of their own could also have unintended repercussions, so the development and application of these technologies must proceed cautiously.
CONCLUSION
Notwithstanding disagreements over the militarization of artificial intelligence, some contend that autonomous weapons or robots can perform tasks more effectively, efficiently, and compliantly than people. Unlike humans, they are more precise because they are not driven by the instinct for self-preservation. These machines also lack impulses such as fear, aggression, paranoia, hysteria, or prejudice, which can cloud human judgment and cause unintentional or inappropriate actions. They are likewise less vulnerable to cognitive biases, in which information is selectively gathered to fit preconceived assumptions, producing poor decisions and increasing collateral harm.
Artificial intelligence emerges from the convergence of machine learning and natural language processing. The technology's growing capacity to analyse unstructured data and generate novel results makes it a potent tool. Although it will initially serve to augment human abilities, it also has a wide range of uses in fields such as data analysis, where it can outperform human cognition. The use of AI in military operations has important ramifications for security, deterrence, and intelligence. At this point, AI cannot be trusted to make decisions on its own, nor should it ever be allowed to.[7] Moral concerns and safeguards must keep pace with technological developments to minimize the undesirable effects of misalignment, especially as we move towards a third wave of AI that may involve underlying models for reasoning and enhanced handling of ambiguity. Both the benefits and the risks associated with AI are expected to rise.
Author(s) Name: Palisetti Sanjana (Alliance University, Bangalore)
Reference(s):
[1] Rajeswari Pillai Rajagopalan and Sameer Patil, Future Warfare and Critical Technologies: Evolving Tactics and Strategies (New Delhi: ORF and Global Policy Journal, 2024)
[2] Ibid
[3] 'The Pros and Cons of Using AI in Military Divisions Worldwide' (PEI-Genesis) <https://search.app/2oSxnyVu1UWTyBT7A> accessed 17 July 2024
[4] Jatin Karela and Yaksh Bhakhand, 'Potential Warfare: The Optics and Adversities of AI in the Military on the International Plane' (2024) Vol 2, 2582-7340, 3
[5] Ibid
[6] 'The Pros and Cons of Using AI in Military Divisions Worldwide' (PEI-Genesis) <https://search.app/2oSxnyVu1UWTyBT7A> accessed 17 July 2024
[7] Rajeswari Pillai Rajagopalan and Sameer Patil, Future Warfare and Critical Technologies: Evolving Tactics and Strategies (New Delhi: ORF and Global Policy Journal, 2024)