Abstract
In the age of artificial intelligence, lethal autonomous weapons pose a serious challenge to the established principles of distinction, proportionality, precaution, and responsibility under international humanitarian law, and their use may be punishable as a war crime or a crime against humanity. Compared with the punitive function of international law, its preventive function matters more for fundamentally protecting human beings from being killed by lethal autonomous weapons. China is the first permanent member of the UN Security Council to propose concluding a legally binding international agreement to regulate lethal autonomous weapons within the framework of the UN Convention on Certain Conventional Weapons, while the European Union intends to prevent harm to human beings from autonomous weapons through its Civil Law Rules on Robotics and Ethics Guidelines for Trustworthy Artificial Intelligence. When regulating lethal autonomous weapons by concluding a protocol under the UN Convention on Certain Conventional Weapons, autonomous weapons should first be defined and then divided into lethal and non-lethal autonomous weapons. Lethal autonomous weapons should be denied legitimacy under international law and explicitly prohibited, whereas non-lethal autonomous weapons should be expressly restricted and their reasonable use in war or armed conflict regulated.
Authors
杨成铭 (Yang Chengming)
魏庆 (Wei Qing)
Source
《政法论坛》(Tribune of Political Science and Law)
CSSCI
PKU Core Journal (北大核心)
2020, No. 4, pp. 133-143 (11 pages)
Keywords
Lethal Autonomous Weapons
Regulation
International Law