Abstract
In order to address the moral lag problem of contemporary society with the help of artificial intelligence, a large body of interdisciplinary and cross-cultural research has focused on the design, implementation, and development of artificial moral agents. However, a more important meta-question is: do people actually want machines/robots/AI/computers to make moral decisions? Empirical studies by moral psychologists and neuroscientists suggest that the answer may be "no", and that this aversion is mediated by the perception that artificial intelligence lacks a complete mind. This paper argues that, in order to effectively improve people's acceptance of AI making moral decisions, the legitimate circle of moral decision makers should, on the one hand, be extended to artificial intelligence; on the other hand, AI should be further anthropomorphized to increase its perceived experience (warmth) and agency (competence, expertise), thereby enhancing mutual empathy and trust between humans and AI. At the same time, special attention should be paid to utilizing the "Eliza effect" while avoiding the "uncanny valley effect".
Authors
DING Xiaojun (丁晓军)
YU Feng (喻丰)
XU Liying (许丽颖)
(School of Humanities and Social Sciences, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049; School of Philosophy, Wuhan University, Wuhan, Hubei, 430072; School of Social Sciences, Tsinghua University, Beijing, 100875)
Source
《自然辩证法通讯》
CSSCI
Peking University Core Journal (北大核心)
2020, No. 12, pp. 80-86 (7 pages)
Journal of Dialectics of Nature
Funding
Youth Fund Project of Humanities and Social Sciences Research of the Ministry of Education, "Research on the Cognitive Norms of Rational Action" (Project No. 19YJC720006)
National Social Science Fund of China Youth Project, "Research on the Attribution of Moral Responsibility to Anthropomorphized Artificial Intelligence" (Project No. 20CZX059)
Keywords
Artificial moral agent
Moral decision
Moral lag problem
Mind perception
Anthropomorphism