Abstract
Militaries around the world believe that integrating machine learning methods throughout their forces could improve their effectiveness. From algorithms that aid recruiting and promotion, to those designed for surveillance and early warning, to those used directly on the battlefield, applications of artificial intelligence (AI) could shape the future character of warfare. These uses could also generate significant risks for international stability. These risks relate to broad facets of AI that could shape warfare, limits of machine learning methods that could increase the risk of inadvertent conflict, and specific mission areas, such as nuclear operations, where the use of AI could be dangerous. To reduce these risks and promote international stability, we explore the potential use of confidence-building measures (CBMs), constructed around the shared interest that all countries have in preventing inadvertent war. Though not a panacea, CBMs could create standards for information sharing and notifications about AI-enabled systems that make inadvertent conflict less likely.
Authors
Editorial Office
Michael C. Horowitz; Paul Scharre (Center for a New American Security, Washington DC 20005, USA)
Source
《信息安全与通信保密》
2021, No. 5, pp. 49-58 (10 pages)
Information Security and Communications Privacy