Journal articles: 4 results found
1. Autonomous Application of Artificial Intelligence Based on Teachers' Rights and Interests: An Interpretation of UNESCO's AI Competency Framework for Teachers
Author: 苗逢春 (Miao Fengchun). 《开放教育研究》 (Open Education Research), Peking University Core Journal (北大核心), 2024, Issue 5, pp. 4-16 (13 pages).
This article interprets the AI Competency Framework for Teachers issued by UNESCO. Artificial intelligence, increasingly embodied as human-like intelligent agents, is driving the construction of a triangular "teacher-student-AI" pedagogical relationship and challenging teachers' agency within human-machine interaction, which underscores the need for teachers to understand and uphold a human-centred view of AI. The currently prevalent production paradigm of intelligent technology, in which human data is extractively mined to train AI models, is accelerating the proliferation of AI ethics problems and the disruption and restructuring of social relations in the intelligent era; AI ethics and social responsibility in the AI era have therefore become fields that teachers must learn. As a ubiquitous, foundational general-purpose technology, AI has already shown its transformative potential to upend production processes, social practices and ways of living, making it increasingly urgent for teachers to develop the capacity to understand how AI basically works, to critically appraise its suitability for teaching and nurturing learners, and to explore effective ways of using AI to support teaching and professional development. Accordingly, governments and education institutions should take autonomous use grounded in teachers' educational rights and interests as the guiding purpose; define teachers' AI competencies across the dimensions of a human-centred view of AI, AI ethics, AI foundational knowledge and application skills, AI-pedagogy integration, and AI for teachers' professional development; and plan the objectives, content and methods of training curricula against the progression levels of acquire, deepen and create, so as to help teachers build the competencies needed to use AI responsibly, effectively and creatively.
Keywords: teachers' AI competencies; teacher-student-AI triangular pedagogical relationship; human-centred view of AI; AI ethics; AI foundational knowledge and application skills; AI-pedagogy integration; AI-supported teacher professional learning
2. Analyzing and Governing the Ethics of Artificial Intelligence in Education: An Educational Interpretation of the Recommendation on the Ethics of Artificial Intelligence (Cited by: 36)
Author: 苗逢春 (Miao Fengchun). 《中国电化教育》 (China Educational Technology), CSSCI, Peking University Core Journal (北大核心), 2022, Issue 6, pp. 22-36 (15 pages).
This study seeks to expose the situation that AI ethics governance must confront: an AI ethics hegemony controlled by private digital governance bodies, disorder in global AI governance, and the lag of sovereign states' AI governance. It defines an analytical framework for AI ethics issues along the dimension of interaction between machine decision-making and human practice, and dissects the typical ethical problems raised by data- and algorithm-driven prediction and decision-making. Within this framework, it distinguishes the main forms that AI ethics problems take in education and the role of education as the principal channel for cultivating AI ethical values and the capacity to act ethically, and it explores the institutional space and practical directions for rebuilding systems so that the ethical governance of AI in education advances in step with the intelligent upgrading of education.
Keywords: private digital governance bodies; analytical framework for AI ethics issues; ethics of AI in education; values in a holistic view of the biosphere; fairness and non-discrimination; ethical impact assessment framework; data policy
3. Navigating AI Cybersecurity: Evolving Landscape and Challenges
Authors: Maryam Roshanaei, Mahir R. Khan, Natalie N. Sylvester. Journal of Intelligent Learning Systems and Applications, 2024, Issue 3, pp. 155-174 (20 pages).
The rapid integration of artificial intelligence (AI) into critical sectors has revealed a complex landscape of cybersecurity challenges that are unique to these advanced technologies. AI systems, with their extensive data dependencies and algorithmic complexities, are susceptible to a broad spectrum of cyber threats that can undermine their functionality and compromise their integrity. This paper provides a detailed analysis of these threats, which include data poisoning, adversarial attacks, and systemic vulnerabilities that arise from the AI's operational and infrastructural frameworks. This paper critically examines the effectiveness of existing defensive mechanisms, such as adversarial training and threat modeling, that aim to fortify AI systems against such vulnerabilities. In response to the limitations of current approaches, this paper explores a comprehensive framework for the design and implementation of robust AI systems. This framework emphasizes the development of dynamic, adaptive security measures that can evolve in response to new and emerging cyber threats, thereby enhancing the resilience of AI systems. Furthermore, the paper addresses the ethical dimensions of AI cybersecurity, highlighting the need for strategies that not only protect systems but also preserve user privacy and ensure fairness across all operations. In addition to current strategies and ethical concerns, this paper explores future directions in AI cybersecurity.
Keywords: AI cybersecurity; adversarial attacks; defensive strategies; ethical AI
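The adversarial attacks and adversarial training surveyed in the abstract above rest on one mechanism: small, gradient-guided input perturbations that flip a model's prediction. The sketch below is not from the paper; it is a minimal, self-contained illustration of the fast gradient sign method (FGSM) against a hand-coded logistic-regression classifier, with the weights `w`, bias `b`, input `x` and budget `eps` invented purely for the example.

```python
import numpy as np

# Toy "trained" linear classifier: p(y=1|x) = sigmoid(w.x + b).
# All numbers below are made up for illustration only.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability that x belongs to the positive class."""
    return sigmoid(w @ x + b)

def input_gradient(x, y):
    """Gradient of the cross-entropy loss w.r.t. the *input* x for true
    label y: dL/dx = (sigmoid(w.x + b) - y) * w."""
    return (predict(x) - y) * w

x = np.array([0.2, -0.1, 0.4])   # clean input, correctly classified as positive
y = 1.0                          # its true label
eps = 0.25                       # L-infinity perturbation budget

# FGSM: one signed gradient step that increases the loss,
# changing each feature by at most eps.
x_adv = x + eps * np.sign(input_gradient(x, y))

print(f"clean input       -> p(y=1) = {predict(x):.3f}")      # ~0.69 (positive)
print(f"adversarial input -> p(y=1) = {predict(x_adv):.3f}")  # ~0.45 (flipped)
print(f"largest per-feature change  = {np.abs(x_adv - x).max():.2f}")
```

With these toy numbers the clean input is scored at about 0.69, while the perturbed input, which differs by at most 0.25 per feature, falls below the 0.5 decision threshold. Adversarial training, one of the defences the paper examines, hardens a model by folding such perturbed examples back into its training set.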
4. Tackling the Existential Threats from Quantum Computers and AI
Author: Fazal Raheman. Intelligent Information Management, 2024, Issue 3, pp. 121-146 (26 pages).
Although AI and quantum computing (QC) are fast emerging as key enablers of the future Internet, experts believe they pose an existential threat to humanity. Responding to the frenzied release of ChatGPT/GPT-4, thousands of alarmed tech leaders recently signed an open letter to pause AI research to prepare for the catastrophic threats to humanity from uncontrolled AGI (Artificial General Intelligence). Perceived as an "epistemological nightmare", AGI is believed to be on the anvil with GPT-5. Two computing rules appear responsible for these risks. 1) Mandatory third-party permissions that allow computers to run applications at the expense of introducing vulnerabilities. 2) The Halting Problem of Turing-complete AI programming languages potentially renders AGI unstoppable. The double whammy of these inherent weaknesses remains invincible under the legacy systems. A recent cybersecurity breakthrough shows that banning all permissions reduces the computer attack surface to zero, delivering a new zero vulnerability computing (ZVC) paradigm. Deploying ZVC and blockchain, this paper formulates and supports a hypothesis: "Safe, secure, ethical, controllable AGI/QC is possible by conquering the two unassailable rules of computability." Pursued by a European consortium, testing/proving the proposed hypothesis will have a groundbreaking impact on the future digital infrastructure when AGI/QC starts powering the 75 billion internet devices by 2025.
Keywords: ethical AI; quantum computers; existential threat; computer vulnerabilities; halting problem; AGI
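The abstract above treats the Halting Problem as one of the two "unassailable rules of computability". As a reminder of why it is unassailable, the sketch below (not from the paper) restates Turing's diagonalization argument in Python: `halts` is a hypothetical decider that cannot actually be implemented, and `paradox` is the self-referential program that defeats any candidate implementation.

```python
# Hypothetical decider: returns True iff running program(argument) would
# eventually halt. The point of the argument is that no correct,
# always-terminating implementation of this function can exist.
def halts(program, argument):
    raise NotImplementedError("undecidable in general")

# Self-referential adversary: does the opposite of whatever `halts`
# predicts about running `program` on itself.
def paradox(program):
    if halts(program, program):
        while True:          # predicted to halt -> loop forever
            pass
    else:
        return               # predicted to loop -> halt immediately

# The contradiction, spelled out:
#   if halts(paradox, paradox) is True  -> paradox(paradox) loops forever
#   if halts(paradox, paradox) is False -> paradox(paradox) halts at once
# Either way the decider is wrong about at least one input, so no general
# halting decider exists for a Turing-complete language.
```

Any Turing-complete language inherits this limit, which is presumably the sense in which the abstract says the Halting Problem "potentially renders AGI unstoppable": whether an arbitrary program eventually stops cannot, in general, be decided in advance.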