Weapons of mass disruption: artificial intelligence and international law

Cited: 0
Author
Chesterman, Simon [1 ,2 ]
Affiliations
[1] Natl Univ Singapore, Fac Law, Singapore, Singapore
[2] AI Singapore, AI Governance, Singapore, Singapore
Keywords
artificial intelligence; cybersecurity; regulation; International Atomic Energy Agency; weaponisation; lethal autonomous weapon systems; ombudspersons; REGULATORY CAPTURE; ICANN; TECHNOLOGY; GOVERNANCE;
DOI
Not available
Chinese Library Classification (CLC)
D9 [Law]; DF [Law];
Subject classification code
0301;
Abstract
The answers each political community finds to the law-reform questions posed by artificial intelligence (AI) may differ, but a near-term threat is that AI systems capable of causing harm will not be confined to one jurisdiction; indeed, it may be impossible to link them to any specific jurisdiction at all. This is not a new problem in cybersecurity, but divergent national approaches will pose barriers to effective regulation, barriers exacerbated by the speed, autonomy and opacity of AI systems. For that reason, some measure of collective action is needed. Lessons may be learned from efforts to regulate the global commons, as well as from moves to outlaw certain products (weapons and drugs, for example) and activities (such as slavery and child sex tourism). The argument advanced here is that regulation, in the sense of public control, requires the active involvement of States. To coordinate those activities and enforce global 'red lines', this paper posits a hypothetical International Artificial Intelligence Agency, modelled on the agency created after the Second World War to promote peaceful uses of nuclear energy while deterring or containing its weaponisation and other harmful effects.
Pages: 181-203
Page count: 23