Legal evaluation of the attacks caused by artificial intelligence-based lethal weapon systems within the context of Rome statute

Cited by: 3
Authors
Sari, Onur [1 ]
Celik, Sener [2 ]
Affiliations
[1] Istanbul Kent Univ, Onur Sari Dept Law, Cihangir Mah Siraselviler Cad 71, TR-34433 Istanbul, Turkey
[2] Heritage Istanbul, Heritage Akad, Istanbul, Turkey
Keywords
AI; Autonomy; Crime of aggression; Rome statute; IP law; IT law; International law
DOI
10.1016/j.clsr.2021.105564
Chinese Library Classification (CLC)
D9 [Law]; DF [Law]
Discipline classification code
0301
Abstract
Artificial intelligence (AI), at the level of development it has reached today, has become a scientific reality studied in law, political science, and other social sciences, in addition to computer and software engineering. AI systems that performed relatively simple tasks in the early stages of their development are expected to become fully or largely autonomous in the near future. As a result, AI, which encompasses the concepts of machine learning, deep learning, and autonomy, has begun to play an important role in the production and use of smart weapons. However, questions about AI-Based Lethal Weapon Systems (AILWS) and the attacks that such systems can carry out have not been fully answered from a legal perspective. In particular, it remains controversial who will be held responsible for the actions an AILWS has committed. In this article, we discuss whether an AILWS can commit an offense in the context of the Rome Statute, examine the applicable law regarding the responsibility of AILWS, and assess whether these systems can be held responsible under international law, the crime of aggression, and individual responsibility. Our finding is that international legal rules, including the Rome Statute, can be applied to responsibility for the act/crime of aggression caused by an AILWS. However, no matter how advanced the cognitive capacity of an AI software, it will not be possible to invoke the personal responsibility of such a system, since it has no legal personality at all. In that case, responsibility remains with the actors who design, produce, and use the system.
Last but not least, since no AILWS software today has a specific code of conduct enabling legal and ethical reasoning, the study concludes by recommending that states and non-governmental organizations, together with manufacturers, establish the necessary ethical rules, encoded in software, to prevent these systems from committing unlawful acts and to develop mechanisms that restrain AI from operating outside human control.
Pages: 16