The Use of Artificial Intelligence in Armed Conflict under International Law
Abstract
Artificial Intelligence (AI) is a technological achievement that simulates human intelligence through machines or computer programs. The integration of AI into military operations aims to minimize combatant casualties and enhance effectiveness in warfare. Despite these advantages, concerns arise regarding the implementation of AI in armed conflict because of the security challenges it poses. A central issue is the legal framework governing AI as a defense tool. This paper employs a normative juridical research method based on a statutory approach to provide a descriptive analysis of the regulatory framework surrounding AI in armed conflict. The results indicate that the absence of comprehensive regulation complicates the accountability framework and makes the determination of liability intricate, particularly when AI malfunctions because of substandard quality or improper use; in such cases, accountability may extend to both the creator and the user. The concept of liability for violations in armed conflict is examined under international law, highlighting the implications and responsibilities associated with the use of AI within established legal principles. The paper concludes that AI regulation must be crafted to ensure that its use aligns with established procedures within the framework of international law.
Keywords
Artificial Intelligence; Armed Conflict; Drone; International Law; Military Operations; Security
DOI: http://dx.doi.org/10.20956/halrev.v10i2.5267