Journal Articles
2 articles found
1. Pedagogical Alignment of Large Language Models (LLM) for Personalized Learning: A Survey, Trends and Challenges
Authors: Mahefa Abel Razafinirina, William Germain Dimbisoa, Thomas Mahatody. Journal of Intelligent Learning Systems and Applications, 2024, No. 4, pp. 448-480 (33 pages)
This survey investigates how personalized learning offered by Large Language Models (LLMs) could transform educational experiences. We explore Knowledge Editing Techniques (KME), which keep an LLM's knowledge current and are essential for providing accurate, up-to-date information. The datasets analyzed in this article are intended to evaluate LLM performance on educational tasks such as error correction and question answering. We acknowledge the limitations of LLMs while highlighting their fundamental educational capabilities in writing, math, programming, and reasoning. We also explore two promising system architectures for LLM-based education: a Mixture-of-Experts (MoE) framework, in which a central controller directs queries to specialized LLMs for different subjects, and a unified LLM approach. We further discuss the use of LLMs for individualized feedback and their potential for content creation, including videos, quizzes, and plans. Finally, we discuss the difficulties of incorporating LLMs into educational systems and potential solutions, highlighting the importance of factual accuracy, reduced bias, and the fostering of critical thinking. The purpose of this survey is to show the promise of LLMs as well as the issues that must still be resolved to enable their responsible and successful integration into the educational ecosystem.
Keywords: Chain of Thought, Education, IA, LLM, Machine Learning, NLP, Personalized Learning, Prompt Optimization, Video Generation
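The MoE architecture summarized in this abstract pairs a central controller with subject-specialist models. The following is a minimal Python sketch of that routing idea, not the authors' implementation: the class name, the keyword-based classifier, and the stub experts are hypothetical placeholders, and a real system would back each expert with a fine-tuned LLM and an LLM-based router.

```python
from typing import Callable, Dict

class MoETutor:
    """Route each student query to a subject-specialist LLM (illustrative)."""

    def __init__(self, experts: Dict[str, Callable[[str], str]],
                 classify: Callable[[str], str]):
        self.experts = experts    # subject name -> expert LLM callable
        self.classify = classify  # central controller: query -> subject name

    def answer(self, query: str) -> str:
        subject = self.classify(query)                        # controller decides
        expert = self.experts.get(subject, self.experts["general"])
        return expert(query)                                  # specialist responds

# Toy usage with stub experts; each lambda stands in for a specialized LLM.
experts = {
    "math":    lambda q: f"[math expert] {q}",
    "writing": lambda q: f"[writing expert] {q}",
    "general": lambda q: f"[general tutor] {q}",
}
classify = lambda q: ("math" if any(t in q.lower() for t in ("solve", "equation"))
                      else "general")

tutor = MoETutor(experts, classify)
print(tutor.answer("Solve the equation 2x + 3 = 7."))
```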
2. Enhancing BERT's Comprehension Ability with Prompt Learning
Authors: 陈亚当, 杨刚, 王铎霖, 余文斌. 《信息技术》 (Information Technology), 2024, No. 6, pp. 87-93 (7 pages)
Prompt learning aims to use prompt templates to narrow the gap between a language model's pre-training tasks and its downstream tasks. The difficulty lies in designing the prompt template. To address this, the paper proposes a new method that optimizes continuous prompts from automatically searched discrete prompts during template construction. The automatic prompt search is trained on the masked-language-model pre-training task of BERT (Bidirectional Encoder Representations from Transformers); continuous prompt optimization then trains a tensor that maps the automatically searched discrete prompts into a continuous space, tuning the prompt template against the loss function. Experiments on the public SuperGLUE benchmark show that prompt-learning-based BERT significantly improves accuracy and F1 over the original BERT model.
Keywords: prompt learning, Bidirectional Encoder Representations from Transformers (BERT), natural language processing, continuous prompt optimization, masked language model
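The core idea in this abstract, initializing continuous prompt vectors from an automatically searched discrete prompt and training only those vectors against the task loss, can be sketched in PyTorch with Hugging Face Transformers. This is a minimal illustration under stated assumptions, not the paper's method: the example prompt "it was", the sequence layout arithmetic, and the hyperparameters are all assumptions made for the sketch.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the backbone; only the prompt tensor trains

# Assume the discrete search produced the prompt "it was" (illustrative).
discrete_prompt = "it was"
prompt_ids = tokenizer(discrete_prompt, add_special_tokens=False,
                       return_tensors="pt").input_ids
word_emb = model.bert.embeddings.word_embeddings

# Continuous prompts: a trainable copy of the discrete prompt's embeddings.
prompt_embeds = torch.nn.Parameter(
    word_emb(prompt_ids).detach().squeeze(0).clone())
optimizer = torch.optim.Adam([prompt_embeds], lr=1e-3)

def step(sentence: str, label_word: str) -> float:
    """One update: sentence + prompt + [MASK], MLM loss on the label word."""
    enc = tokenizer(f"{sentence} {discrete_prompt} {tokenizer.mask_token} .",
                    return_tensors="pt")
    embeds = word_emb(enc.input_ids)
    # Swap the prompt tokens' embeddings for the trainable continuous vectors;
    # the prompt sits just before [MASK], ".", [SEP] (layout assumption).
    n = prompt_embeds.size(0)
    start = enc.input_ids.size(1) - (n + 3)
    embeds = torch.cat([embeds[:, :start],
                        prompt_embeds.unsqueeze(0),
                        embeds[:, start + n:]], dim=1)
    labels = torch.full_like(enc.input_ids, -100)  # -100 = ignored by the loss
    mask_pos = (enc.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    labels[0, mask_pos] = tokenizer.convert_tokens_to_ids(label_word)
    out = model(inputs_embeds=embeds, attention_mask=enc.attention_mask,
                labels=labels)
    optimizer.zero_grad()
    out.loss.backward()   # gradient flows only into prompt_embeds
    optimizer.step()
    return out.loss.item()

print(step("the movie was a delight", "great"))
```

Because the backbone is frozen, only the small prompt tensor is updated, which is what lets the continuous template adapt to the downstream task while BERT's pre-trained knowledge stays fixed.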