Abstract
As educational AI products become increasingly widespread, the ethical challenges they pose to education should not be ignored. The Ethics Guidelines for Trustworthy AI, issued by the European Union in April 2019, set out an ethical framework for trustworthy AI and recommended paths to its realization. Building on the dimensions of this framework, and drawing on ethical controversies that have already arisen in educational AI applications as well as related research findings, this paper analyzes countermeasures for the ethical dilemmas of AI in education from seven aspects: human agency and oversight, technical robustness and safety, privacy and data governance, societal and environmental well-being, diversity, non-discrimination and fairness, transparency, and accountability. These countermeasures include technical means such as ethics-embedded design and inclusive testing, and non-technical means such as establishing multi-level responsibility allocation mechanisms and improving certification systems, offering stakeholders a reference for keeping intelligent educational products trustworthy throughout development, deployment, and use.
Authors
SHEN Yuan (沈苑); WANG Qiong (汪琼)
Source
Peking University Education Review (《北京大学教育评论》)
CSSCI
Peking University Core Journal (北大核心)
2019, No. 4, pp. 18-34, 184 (18 pages in total)
Funding
Supported by the State Key Laboratory of Cognitive Intelligence (iED2019-Z06)