Abstract
The drafting of China's Artificial Intelligence Law should be launched in step with the revision of the Product Quality Law so that the rules on AI product safety and those on product liability are coordinated. Tangibility is not the criterion for identifying an AI product; information output by an AI system may itself qualify as a product, and the distinction between AI products and AI services should be drawn through typological analysis. In judging product defects, the formulation of technical standards should give particular weight to the new safety needs of AI products, while the finding of an unreasonable danger requires weighing subjective and objective factors such as product instructions, the influence of other products, self-learning, and upgrades and updates. Determining causation requires distinguishing assistive from substitutive AI, reasonably assessing whether the AI's autonomous conduct or the user's conduct constitutes a superseding cause, and introducing a presumption of causation. As to defenses, the later-defect defense should be applied with regard to the upgrading, updating, and self-learning characteristics of AI products; the development-risk defense should be limited by differentiating AI systems according to their risk levels and imposing post-market monitoring obligations; and there is no need to create a separate open-source defense.
Source
《法律科学(西北政法大学学报)》
CSSCI
Peking University Core Journals (北大核心)
2024, No. 4, pp. 3-17 (15 pages)
Science of Law: Journal of Northwest University of Political Science and Law
Funding
Youth Project of the National Social Science Fund of China (20CFX041), "Research on Personal Information Protection in the Dual Context of Artificial Intelligence and the Civil Code."