Journal Articles
184 articles found
1. A Study of the Relationship between Resource Access Patterns and Learning Performance in MOOCs (cited: 23)
Authors: 张媛媛, 李爽. 《中国远程教育》 (CSSCI, Peking University Core), 2019, No. 6, pp. 22-32, 93 (12 pages)
Based on learning-behavior data from 10,598 students in a typical MOOC, this study explores the course's resource access patterns and their effect on learning performance. Twelve online learning-behavior variables, defined along three dimensions (access motivation, resource type, and behavioral engagement), serve as feature variables for mining learners' resource access patterns; together the 12 variables explain 65.7% of the variance in final grades. Two-step clustering on these variables yields six access patterns: active systematic learning, comprehensive exploration, performance-oriented, content-oriented, random drill, and random browsing. The active systematic learning pattern, driven jointly by goal and performance motivation, achieved the best grades, and highly engaged access to performance-related resources significantly affected learners' grades. The paper concludes by discussing the access patterns and their validity, metrics for evaluating them, MOOC course design and learner support, and the study itself, in the hope of informing improvements to MOOC instructional design and learner support in practice.
Keywords: MOOC; resource access pattern; learning performance; behavioral engagement; access motivation; resource type; learning analytics; instructional design
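As a rough illustration of the pattern-mining step, the sketch below clusters standardized synthetic behavior features with plain k-means. It is a miniature stand-in only: the three features, all values, and k = 2 are invented here, and the paper itself applies two-step clustering to 12 variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for behavior features (e.g. video views, quiz attempts,
# forum posts) for two clearly separated learner groups.
group_a = rng.normal(loc=[5.0, 1.0, 0.5], scale=0.3, size=(50, 3))
group_b = rng.normal(loc=[1.0, 6.0, 4.0], scale=0.3, size=(50, 3))
X = np.vstack([group_a, group_b])

# Standardize so no single feature dominates the distance metric.
X = (X - X.mean(axis=0)) / X.std(axis=0)

def kmeans2(X, iters=50):
    """Plain 2-means, a simplified stand-in for the paper's two-step clustering."""
    # Deterministic, well-spread init: the first point and the point farthest from it.
    centers = np.stack([X[0], X[np.linalg.norm(X - X[0], axis=1).argmax()]])
    for _ in range(iters):
        # Assign each learner to the nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its members.
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(2)])
    return labels

labels = kmeans2(X)
print(sorted(np.bincount(labels).tolist()))  # → [50, 50]
```

With well-separated groups the two synthetic "access patterns" are recovered exactly; real behavior data would be far noisier, which is one reason the paper validates clusters against final grades.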
2. British Higher Education: Strengths, Challenges, and Responses (cited: 13)
Author: 潘发勤. 《比较教育研究》 (CSSCI, Peking University Core), 2004, No. 2, pp. 23-27 (5 pages)
British higher education has achieved remarkable success over the past 30 years, but it also faces many challenges from home and abroad. Anticipating these, the Department for Education and Skills has proposed strategies in several areas: promoting excellence in research and teaching, strengthening cooperation with business, expanding higher education, fair access, and financial support.
Keywords: British higher education; strategy; research; teaching; university-industry cooperation; fair access
3. A Study of Trust Mechanisms in Virtual Learning Communities (cited: 10)
Authors: 王淑娟, 刘清堂. 《远程教育杂志》, 2007, No. 3, pp. 12-15 (4 pages)
The virtual learning community is an important application model of e-learning. It emphasizes learners' agency and autonomy in learning activities and, above all, collaboration: community members share learning resources, exchange learning experience, and complete learning tasks together. However, the virtual and social nature of such communities makes information sharing and collaboration among members a complex problem, an important aspect of which is whether members trust the knowledge that other members provide, or believe that others will contribute the knowledge they possess. Starting from the trust problems found in virtual learning communities, this paper discusses, through research and systematic analysis, how a trust mechanism for such communities can be established.
Keywords: virtual learning community; e-learning; trust; access control; knowledge management
4. Challenges for Distance Education and E-learning: Quality, Recognition, and Effectiveness (cited: 12)
Authors: Anne Gaskell, Roger Mills, 肖俊洪. 《中国远程教育》 (CSSCI, Peking University Core), 2015, No. 1, pp. 5-15, 41 (12 pages)
Overall, distance education courses and qualifications suffer from a poor reputation for quality, and distance teaching institutions, students, and staff often have to overcome this negative perception. Of the many challenges affecting the reputation of open and distance e-learning, this article analyses five: the quality of teaching, learning, and quality-assurance processes; learning outcomes; pedagogy; opportunities for learning; and the perceptions of students, staff, and employers. It concludes with reflections on current and future developments in open and distance e-learning. We argue that many of these challenges have been, or can be, resolved in most contexts, and that they now confront all modes of teaching. What deserves more of our attention is not the inputs to distance courses but their outcomes: students' success in achieving their goals in education, employment, and future livelihoods, which in turn shapes how employers and others view open and distance e-learning.
Keywords: open and distance e-learning; quality assurance; learning and teaching; opportunity; massive open online courses (MOOCs); perceptions of distance learning
5. Energy-Efficient Federated Edge Learning with Joint Communication and Computation Design (cited: 8)
Authors: Xiaopeng Mo, Jie Xu. 《Journal of Communications and Information Networks》 (CSCD), 2021, No. 2, pp. 110-124 (15 pages)
This paper studies a federated edge learning system in which an edge server coordinates a set of edge devices to train a shared machine learning (ML) model on their locally distributed data samples. During distributed training, we exploit joint communication and computation design to improve system energy efficiency: the communication resources for global ML-parameter aggregation and the computation resources for local ML-parameter updates are optimized jointly. In particular, we consider two transmission protocols for devices to upload ML parameters to the edge server, based on non-orthogonal multiple access (NOMA) and time division multiple access (TDMA), respectively. Under both protocols, we minimize the total energy consumption of all edge devices over a finite training duration subject to a given training accuracy, by jointly optimizing the devices' transmission power and rates for uploading ML parameters and their central processing unit (CPU) frequencies for local updates. We propose efficient algorithms that solve the formulated energy minimization problems with techniques from convex optimization. Numerical results show that, compared to benchmark schemes, the proposed joint design can significantly improve the energy efficiency of the federated edge learning system by properly balancing the energy tradeoff between communication and computation.
Keywords: federated edge learning; energy efficiency; joint communication and computation design; resource allocation; non-orthogonal multiple access (NOMA); optimization
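The communication-computation tradeoff the abstract describes can be illustrated with a one-device, one-round toy model. All constants below are invented for illustration, and the paper's actual formulation (NOMA/TDMA uploading, accuracy constraints, multiple devices) is far more general: computation energy falls when more of the deadline is spent computing at a lower CPU frequency, while transmission energy rises as the upload window shrinks.

```python
import numpy as np

# Invented per-round parameters for a single device (illustrative only).
C = 1e8        # CPU cycles for one local model update
kappa = 1e-27  # effective switched-capacitance constant (CPU energy = kappa*C*f^2)
b = 2e6        # bits of ML parameters to upload
B = 1e6        # channel bandwidth (Hz)
N0h = 1e-4     # noise power divided by channel gain (W)
T = 1.0        # per-round deadline (s)

def round_energy(t_com):
    """Device energy when t_com seconds go to uploading and T - t_com to computing."""
    f = C / (T - t_com)            # slowest CPU frequency that meets the deadline
    e_cmp = kappa * C * f**2       # dynamic CPU energy
    # Invert the Shannon rate to get the power needed to send b bits in t_com s.
    p_tx = N0h * (2 ** (b / (B * t_com)) - 1)
    return e_cmp + p_tx * t_com

# Grid-search the time split between communication and computation.
ts = np.linspace(0.05, 0.95, 901)
energies = np.array([round_energy(t) for t in ts])
best = float(ts[energies.argmin()])
print(f"best upload window ≈ {best:.2f} s")
```

The minimum sits strictly inside the interval: too short an upload window blows up transmit power exponentially, too long a window forces a high (quadratically costly) CPU frequency. The paper solves the same kind of balance with convex optimization rather than a grid.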
6. AI/ML Enabled Automation System for Software Defined Disaggregated Open Radio Access Networks: Transforming Telecommunication Business
Author: Sunil Kumar. 《Big Data Mining and Analytics》 (EI, CSCD), 2024, No. 2, pp. 271-293 (23 pages)
The Open Air Interface (OAI) alliance recently introduced a new disaggregated Open Radio Access Network (O-RAN) framework for next-generation telecommunications and networks. This disaggregated architecture is open, automated, software-defined, and virtualized, and it supports advanced technologies such as Artificial Intelligence/Machine Learning (AI/ML). This intelligent architecture enables programmers to design and customize automated applications according to business needs and to improve quality of service in fifth generation (5G) and Beyond 5G (B5G) networks. Its disaggregated, multi-vendor nature gives new startups and small vendors the opportunity to participate and to provide inexpensive hardware and software solutions, keeping the market competitive. This paper presents the disaggregated, programmable O-RAN architecture with a focus on automation, AI/ML services, and applications built around a Flexible Radio Access Network Intelligent Controller (FRIC). We schematically demonstrate the reinforcement learning, external applications (xApps), and automation steps needed to implement this disaggregated O-RAN architecture. The goal of this research is an AI/ML-enabled automation system for software-defined disaggregated O-RAN that monitors, manages, and performs AI/ML-related services, including model deployment, optimization, inference, and training.
Keywords: Artificial Intelligence (AI); Reinforcement Learning (RL); Open Radio Access Network (O-RAN); Flexible Radio Access Network Intelligent Controller (FRIC); external applications (xApps); Machine Learning (ML); sixth generation (6G)
7. A Deep Reinforcement Learning-Based Technique for Optimal Power Allocation in Multiple Access Communications
Authors: Sepehr Soltani, Ehsan Ghafourian, Reza Salehi, Diego Martín, Milad Vahidi. 《Intelligent Automation & Soft Computing》, 2024, No. 1, pp. 93-108 (16 pages)
For many years, researchers have explored model-driven power allocation (PA) algorithms for wireless networks with interfering multi-user communications. Today, data-driven machine learning methods are widely used to analyze wireless communication systems, and among them deep reinforcement learning (DRL) plays a significant role in solving optimization problems under constraints. To this end, this paper investigates the PA problem in a k-user multiple access channel (MAC), where k transmitters (e.g., mobile users) each send an independent message to a common receiver (e.g., a base station) over wireless channels. We first train a deep Q network (DQN) with a deep Q learning (DQL) algorithm in a simulated environment, using offline learning. The DQN is then trained online on real data for the PA problem, maximizing the sum rate subject to the source power constraint. Simulation results indicate that the proposed DQN method achieves a better sum rate than available approaches such as fractional programming (FP) and weighted minimum mean squared error (WMMSE). Moreover, across different user densities, the proposed DQN outperforms the benchmark algorithms, verifying good generalization over wireless multi-user communication systems.
Keywords: deep reinforcement learning; deep Q learning; multiple access channel; power allocation
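A single-state (bandit) tabular Q-learning toy conveys the flavor of learning power allocation from reward feedback, though the paper trains a DQN; the channel gains, discrete power levels, and SIC sum-rate objective below are all invented for illustration.

```python
import itertools, math, random

random.seed(1)

# Toy 2-user multiple access channel (illustrative values, not from the paper).
h = [0.9, 0.6]            # channel gains
noise = 0.1
levels = [0.0, 0.5, 1.0]  # discrete transmit power levels per user
actions = list(itertools.product(levels, repeat=2))  # joint power choices

def sum_rate(p):
    # Sum capacity of the MAC with successive interference cancellation.
    return math.log2(1 + sum(pi * hi for pi, hi in zip(p, h)) / noise)

# Q[a] tracks the expected reward of each joint power action (epsilon-greedy).
Q = [0.0] * len(actions)
alpha, eps = 0.1, 0.2
for _ in range(3000):
    if random.random() < eps:
        a = random.randrange(len(actions))          # explore
    else:
        a = max(range(len(actions)), key=Q.__getitem__)  # exploit
    r = sum_rate(actions[a])  # a real system would measure this from feedback
    Q[a] += alpha * (r - Q[a])

best = actions[max(range(len(actions)), key=Q.__getitem__)]
print(best)  # → (1.0, 1.0)
```

With SIC the sum rate grows monotonically with received power, so the learner should converge on full power for both users; under an interference-limited model (the harder case the paper targets) the optimum is non-trivial and a function approximator such as a DQN becomes worthwhile.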
8. Resource Allocation for Cognitive Network Slicing in PD-SCMA System Based on Two-Way Deep Reinforcement Learning
Authors: Zhang Zhenyu, Zhang Yong, Yuan Siyu, Cheng Zhenjie. 《China Communications》 (SCIE, CSCD), 2024, No. 6, pp. 53-68 (16 pages)
In this paper, we propose a two-way deep reinforcement learning (DRL)-based resource allocation algorithm that solves the resource allocation problem in a cognitive downlink network operating in underlay mode. Secondary users (SUs) in the cognitive network are multiplexed by a new Power Domain Sparse Code Multiple Access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: an enhanced mobile broadband (eMBB) slice and an ultra-reliable low latency communication (URLLC) slice. We design a Double Deep Q Network (DDQN) to output the optimal codebook assignment and, simultaneously, a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation. The objective is to jointly optimize the spectral efficiency of the system and the Quality of Service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and a modified JEERA algorithm in spectral efficiency and QoS satisfaction. Additionally, compared with Power Domain Non-orthogonal Multiple Access (PD-NOMA) slices and Sparse Code Multiple Access (SCMA) slices, PD-SCMA slices can dramatically enhance spectral efficiency and increase the number of accessible users.
Keywords: cognitive radio; deep reinforcement learning; network slicing; power-domain non-orthogonal multiple access; resource allocation
9. Energy-Efficient Traffic Offloading for RSMA-Based Hybrid Satellite Terrestrial Networks with Deep Reinforcement Learning
Authors: Qingmiao Zhang, Lidong Zhu, Yanyan Chen, Shan Jiang. 《China Communications》 (SCIE, CSCD), 2024, No. 2, pp. 49-58 (10 pages)
As demands for massive connectivity and vast coverage grow rapidly in next-generation wireless networks, rate splitting multiple access (RSMA) is considered a promising new access scheme, since it can provide higher efficiency with limited spectrum resources. In this paper, combining spectrum splitting with rate splitting, we propose to allocate resources with traffic offloading in hybrid satellite terrestrial networks. A novel deep reinforcement learning method is adopted to solve this challenging non-convex problem. However, a never-ending learning process could prohibit practical implementation, so we introduce a switch mechanism to avoid unnecessary learning. Additionally, a QoS constraint in the scheme rules out unsuccessful transmissions. Simulation results validate the energy-efficiency performance and the convergence speed of the proposed algorithm.
Keywords: deep reinforcement learning; energy efficiency; hybrid satellite terrestrial networks; rate splitting multiple access; traffic offloading
10. A Real-Time Video-Based Access Control System for Industrial Parks
Authors: 王铁铮, 任博瀚, 辛锋, 潘焜. 《自动化技术与应用》, 2024, No. 5, pp. 182-188 (7 pages)
In recent years, more and more industrial parks have deployed modern surveillance systems to keep personnel who are unqualified for the work, or who are not wearing protective equipment, out of work areas and so prevent production accidents. Access control based on traditional video surveillance has clear drawbacks: it consumes substantial manpower for inspection and cannot give timely warning before accidents occur. This paper therefore studies an access control system for industrial-park work areas based on real-time video analysis. Deep learning algorithms analyze the images captured by surveillance cameras, verify the identity of people entering and leaving, and check whether they are wearing protective equipment correctly. Experimental results show that the system effectively controls personnel access, improves work safety, reduces the accident rate, and offers some extensibility.
Keywords: real-time video analysis; deep learning; object detection; production safety; access control
11. A Reinforcement Learning-Based Channel Access Mechanism for Slotted Aloha Networks with Multi-Base-Station Cooperative Reception
Authors: 黄元康, 詹文, 孙兴华. 《物联网学报》, 2024, No. 2, pp. 26-35 (10 pages)
As Internet of Things (IoT) base stations are deployed ever more densely, managing network interference becomes increasingly important. IoT devices usually use random access, joining the channel in a distributed way, and in massive-device IoT scenarios severe interference can arise between nodes, sharply degrading network throughput. To manage interference in random access networks, this paper considers a slotted Aloha network with multi-base-station cooperative reception and uses reinforcement learning to design adaptive transmission algorithms that manage interference, optimize network throughput, and improve fairness. First, a Q-learning-based adaptive transmission algorithm is designed; simulations verify that it sustains high network throughput under different traffic loads. Second, to improve fairness, the algorithm is extended with a penalty-function method; simulations verify that the fairness-oriented version greatly improves network fairness while preserving throughput.
Keywords: reinforcement learning; Internet of Things; random access; multi-base-station network; slotted Aloha
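The baseline such adaptive algorithms improve on is plain slotted Aloha, whose throughput peaks when each of n nodes transmits with probability about 1/n. The Monte-Carlo sketch below uses invented parameters (a single shared channel, no cooperative reception); the paper's Q-learning agents effectively adapt this transmit probability online.

```python
import random

random.seed(42)

def slotted_aloha_throughput(n, p, slots=20000):
    """Fraction of slots in which exactly one node transmits (a successful slot)."""
    successes = 0
    for _ in range(slots):
        tx = sum(1 for _ in range(n) if random.random() < p)
        if tx == 1:
            successes += 1
    return successes / slots

n = 50
# Classical theory: throughput n*p*(1-p)**(n-1) peaks near p = 1/n at ~1/e.
probs = (0.005, 0.01, 0.02, 0.05, 0.1)
results = {p: slotted_aloha_throughput(n, p) for p in probs}
best_p = max(results, key=results.get)
print(best_p)  # → 0.02, i.e. roughly 1/n
```

The simulated peak throughput lands near 1/e ≈ 0.368, the textbook slotted-Aloha limit; learning-based access tries to hold the network at this operating point (and improve fairness) as traffic varies.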
12. Green Concerns in Federated Learning over 6G (cited: 5)
Authors: Borui Zhao, Qimei Cui, Shengyuan Liang, Jinli Zhai, Yanzhao Hou, Xueqing Huang, Miao Pan, Xiaofeng Tao. 《China Communications》 (SCIE, CSCD), 2022, No. 3, pp. 50-69 (20 pages)
As Information, Communications, and Data Technology (ICDT) become deeply integrated, research on 6G is gradually rising. Meanwhile, federated learning (FL), a distributed artificial intelligence (AI) framework, is generally believed to be the most promising way to achieve "Native AI" in 6G. While energy is emerging as a metric in AI and wireless networks, most studies still focus on achieving high accuracy, with little consideration of new features of future networks and their possible impact on energy consumption. To address this, this article focuses on green concerns in FL over 6G. We first analyze and summarize the major energy-consumption challenges caused by the technical characteristics of FL and the dynamic heterogeneity of 6G networks, and model the energy consumption of FL over 6G in terms of computation and communication. We classify and summarize the basic ways to reduce energy and present several feasible green designs for an FL-based 6G network architecture from three perspectives. Based on simulation results, we provide a useful guideline for researchers: different schemes should be used to reach minimum energy consumption, at a reasonable cost in learning accuracy, for different network scenarios and service requirements in an FL-based 6G network.
Keywords: 6G; native AI; federated learning; radio access network; green communications
13. BLS-identification: A Device Fingerprint Classification Mechanism Based on Broad Learning for Internet of Things
Authors: Yu Zhang, Bei Gong, Qian Wang. 《Digital Communications and Networks》 (SCIE, CSCD), 2024, No. 3, pp. 728-739 (12 pages)
The popularity of the Internet of Things (IoT) has connected a large number of vulnerable devices to the Internet, bringing huge security risks. As a network-level security authentication method, machine learning-based device fingerprinting has attracted considerable attention because it can detect vulnerable devices during complex, heterogeneous access phases. However, flexible and diverse IoT devices with limited resources make device-fingerprint authentication harder to execute in IoT, because the model network must be retrained to deal with incremental features or device types. To address this problem, a device fingerprinting mechanism based on a Broad Learning System (BLS) is proposed in this paper. The mechanism first characterizes IoT devices by traffic analysis, exploiting identifiable differences in the traffic data of IoT devices, and extracts feature parameters from the traffic packets. A hierarchical hybrid sampling method is designed for the preprocessing phase to improve the imbalanced data distribution and reconstruct the fingerprint dataset. The complexity of the dataset is reduced with Principal Component Analysis (PCA), and the device type is identified by training weights with the BLS. Experimental results show that the proposed method achieves state-of-the-art accuracy with less training time than other existing methods.
Keywords: device fingerprint; traffic analysis; class imbalance; broad learning system; access authentication
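A minimal sketch of the broad-learning idea, under the assumption that a BLS can be reduced to fixed random mapped-feature and enhancement nodes plus a one-shot ridge-regression readout (no backpropagation); the "traffic features", layer sizes, and both device classes below are synthetic and arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic "traffic features" for two hypothetical device types
# (illustrative; real fingerprints would come from packet statistics).
X = np.vstack([rng.normal(0.0, 1.0, (60, 4)), rng.normal(3.0, 1.0, (60, 4))])
y = np.array([0] * 60 + [1] * 60)
Y = np.eye(2)[y]                      # one-hot labels

W_f = rng.normal(size=(4, 20))        # random mapped-feature weights (fixed)
W_e = rng.normal(size=(20, 30))       # random enhancement weights (fixed)

def expand(X):
    Z = np.tanh(X @ W_f)              # mapped feature nodes
    H = np.tanh(Z @ W_e)              # enhancement nodes
    return np.hstack([Z, H])

# Output weights from a single ridge-regression solve: this one-shot fit is
# what makes broad learning cheap to (re)train compared with deep nets.
A = expand(X)
lam = 1e-2
W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

acc = ((expand(X) @ W_out).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because only the output layer is solved for, adding incremental nodes or classes amounts to enlarging this linear system, which is the property the paper exploits for evolving IoT fingerprints.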
14. Improving Channel Estimation in a NOMA Modulation Environment Based on Ensemble Learning
Authors: Lassaad K. Smirani, Leila Jamel, Latifah Almuqren. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 8, pp. 1315-1337 (23 pages)
This study presents a layered-generalization ensemble model for next-generation mobile radio, focusing on supervised channel estimation. Channel estimation typically involves inserting pilot symbols at a well-balanced rhythm and in a suitable layout; the proposed model, Stacked Generalization for Channel Estimation (SGCE), aims to improve estimation performance by eliminating pilot insertion and improving throughput. SGCE incorporates six machine learning methods: random forest (RF), gradient boosting machine (GB), light gradient boosting machine (LGBM), support vector regression (SVR), extremely randomized trees (ERT), and extreme gradient boosting (XGB). By generating meta-data from five models (RF, GB, LGBM, SVR, and ERT), we ensure accurate channel-coefficient predictions using the XGB model. Modeling performance is validated with leave-one-out cross-validation (LOOCV), where each observation in turn serves as the validation set while the remaining observations form the training set. SGCE shows higher mean and median accuracy than the separate models, achieving an average accuracy of 98.4%, a precision of 98.1%, and the highest F1-score of 98.5% while accurately predicting channel coefficients. Furthermore, the proposed method outperforms prior traditional and intelligent techniques in throughput and bit error rate. SGCE's superior performance highlights its efficacy in optimizing channel estimation: it can effectively predict channel coefficients and contribute to the overall efficiency of mobile radio systems. Through extensive experimentation and evaluation, we demonstrate that SGCE improves on previous channel estimation techniques, with significant implications for optimizing channel estimation in modern communication systems.
Keywords: stacked generalization; ensemble learning; Non-Orthogonal Multiple Access (NOMA); channel estimation; 5G
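The stacked-generalization step can be sketched as follows. This is a simplified, in-sample stand-in: real stacking, as in SGCE, fits the meta-learner on out-of-fold base predictions, and the paper's base learners are RF/GB/LGBM/SVR/ERT rather than the toy polynomial fits used here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic regression task standing in for channel-coefficient prediction.
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(0, 0.05, x.size)

def linfit(features, target):
    """Least-squares fit; returns the prediction on the training inputs."""
    coef, *_ = np.linalg.lstsq(features, target, rcond=None)
    return features @ coef

# Two weak base learners with different inductive biases.
p1 = linfit(np.column_stack([x, np.ones_like(x)]), y)         # linear in x
p2 = linfit(np.column_stack([x**2, x, np.ones_like(x)]), y)   # quadratic

# Meta-learner: least squares over the base predictions. Because the span of
# the meta-features contains each base prediction alone, the stacked fit can
# never be worse (in-sample) than the best base model.
meta = linfit(np.column_stack([p1, p2, np.ones_like(x)]), y)

mse = lambda p: float(np.mean((p - y) ** 2))
print(round(mse(p1), 4), round(mse(p2), 4), round(mse(meta), 4))
```

The same "never worse than the best base" logic is what motivates stacking; the out-of-fold construction that SGCE uses is what keeps the guarantee from degenerating into overfitting on unseen data.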
15. Cluster-Based Massive Access for Massive MIMO Systems
Authors: Shiyu Liang, Wei Chen, Zhongwen Sun, Ao Chen, Bo Ai. 《China Communications》 (SCIE, CSCD), 2024, No. 1, pp. 24-33 (10 pages)
Massive machine-type communication aims to support the connection of massive numbers of devices, which remains an important scenario in 6G. In this paper, a novel cluster-based massive access method is proposed for massive multiple input multiple output systems. By exploiting angular-domain characteristics, devices are separated into multiple clusters with a learned cluster-specific dictionary, which enhances the identification of active devices. For detected active devices whose data recovery fails, power-domain non-orthogonal multiple access with successive interference cancellation is employed to recover their data via re-transmission. Simulation results show that the proposed scheme and algorithm achieve improved performance on active user detection and data recovery.
Keywords: compressive sensing; dictionary learning; multiuser detection; random access
16. A Study of the System Equipment Environment for Mobile Learning (cited: 5)
Authors: 方海光, 刘敏, 安素芳. 《现代教育技术》 (CSSCI), 2011, No. 2, pp. 22-27 (6 pages)
This article refers to the terminal devices and network access modes in a mobile learning system collectively as the mobile learning system equipment environment. After introducing in detail the classifications and characteristics of different mobile terminals and wireless networks, it constructs a general model of the mobile learning system equipment environment, uses the model to describe the equipment environments in common use, and systematically classifies and summarizes the equipment-environment choices made in typical existing mobile learning applications at home and abroad. Finally, it offers an outlook on the future development of mobile learning equipment environments.
Keywords: mobile learning; equipment environment; terminal device; network access
17. Policy Network-Based Dual-Agent Deep Reinforcement Learning for Multi-Resource Task Offloading in Multi-Access Edge Cloud Networks
Authors: Feng Chuan, Zhang Xu, Han Pengchao, Ma Tianchun, Gong Xiaoxue. 《China Communications》 (SCIE, CSCD), 2024, No. 4, pp. 53-73 (21 pages)
Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the edge of the network. By bringing computation and storage closer to end users and connected devices, MEC networks can support a wide range of applications. They can also leverage various types of resources, including computation, network, radio, and location-based resources, to provide multidimensional resources for intelligent applications in 5G/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources, and offloading multi-resource task requests to the edge cloud so as to maximize benefit is challenging because the resources provided by devices are heterogeneous. To address this issue, we mathematically model task requests with multiple subtasks and prove that the offloading of multi-resource task requests is NP-hard. Furthermore, we propose a novel Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL), based on a policy network, to optimize the benefit generated by offloading multi-resource task requests in MEC networks. Simulation results show that the proposed algorithm effectively improves the benefit of task offloading, with higher resource utilization than baseline algorithms.
Keywords: benefit maximization; deep reinforcement learning; multi-access edge cloud; task offloading
18. Characterization of Memory Access in Deep Learning and Its Implications in Memory Management
Authors: Jeongha Lee, Hyokyung Bahn. 《Computers, Materials & Continua》 (SCIE, EI), 2023, No. 7, pp. 607-629 (23 pages)
With the recent trend of software intelligence in the Fourth Industrial Revolution, deep learning has become a mainstream workload for modern computer systems. Since the data size of deep learning keeps growing, managing the limited memory capacity efficiently for deep learning workloads becomes important. In this paper, we analyze memory accesses in deep learning workloads and find unique characteristics that differentiate them from traditional workloads. First, comparing instruction and data accesses, data accesses account for 96%-99% of total memory accesses, quite unlike traditional workloads. Second, comparing read and write accesses, writes dominate, accounting for 64%-80% of total memory accesses. Third, although writes make up the majority of accesses, they show a low access bias, with a Zipf parameter of 0.3. Fourth, in predicting re-access, recency is important for reads, but frequency provides more accurate information for writes. Based on these observations, we introduce a Non-Volatile Random Access Memory (NVRAM)-accelerated memory architecture for deep learning workloads and present a new memory management policy for this architecture. By considering the memory access characteristics of deep learning workloads, the proposed policy improves memory performance by 64.3% on average compared to the CLOCK policy.
Keywords: memory access; deep learning; machine learning; memory management; CLOCK
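The CLOCK policy that the paper uses as its baseline can be sketched compactly: one reference bit per frame, and a "hand" that sweeps frames, giving each referenced page a second chance before eviction. The trace below is invented for illustration.

```python
def clock_replace(accesses, frames):
    """CLOCK (second-chance) page replacement; returns the hit count."""
    pages = [None] * frames      # page resident in each frame
    ref = [0] * frames           # reference bits
    hand = 0
    hits = 0
    for p in accesses:
        if p in pages:           # hit: set the reference bit
            ref[pages.index(p)] = 1
            hits += 1
            continue
        # Miss: advance the hand, clearing reference bits, until a frame
        # with ref == 0 is found, then evict that frame's page.
        while ref[hand] == 1:
            ref[hand] = 0
            hand = (hand + 1) % frames
        pages[hand] = p
        ref[hand] = 1
        hand = (hand + 1) % frames
    return hits

# A small trace with one hot page (page 0) interleaved with a cold scan.
trace = [0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0]
print(clock_replace(trace, frames=3))  # → 4
```

Note that even the hot page gets evicted once during the scan; recency-only policies like CLOCK struggle with the write-dominated, low-bias access patterns the paper measures, which motivates its frequency-aware replacement for writes.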
19. A Teaching Model for an Access Course Based on JiTT and Blended Learning (cited: 4)
Author: 侯爽. 《计算机教育》, 2010, No. 24, pp. 105-107 (3 pages)
Introducing the ideas of Just-in-Time Teaching (JiTT) and Blended Learning into the actual teaching of Access database programming is an effective teaching model for fully engaging students' initiative and improving their ability to apply what they learn. This paper describes the model and gives a corresponding evaluation system.
Keywords: JiTT; Blended Learning; teaching model; Access course
20. Applying the Learning Curve of Supervised Practice to Ultrasound-Guided Dynamic Needle Tip Positioning for Radial Artery Catheterization (cited: 4)
Authors: 田园, 于春华, 阮侠, 白冰, 张越伦, 黄宇光. 《基础医学与临床》, 2022, No. 2, pp. 348-352 (5 pages)
Objective: To analyze and plot the learning curves of anesthesiology residents at Peking Union Medical College Hospital performing ultrasound-guided dynamic needle tip positioning (DNTP) for radial artery catheterization, and to determine how much supervised practice is needed to master the technique. Methods: The supervised-practice records of two anesthesiology residents who performed ultrasound-guided DNTP radial artery catheterization from January to June 2018 were reviewed. Learning curves were plotted with cumulative sum analysis (CUSUM) to estimate the amount of supervised practice required, and the number of attempts and the time needed for successful catheterization, as well as the first-attempt and overall success rates, were compared before and after reaching that threshold. Results: Judged respectively by the number of attempts, the time to success, and the first-attempt and overall success rates, the two trainees needed 29/36, 29/29, 29/36, and 14/11 cases to master DNTP. Beyond that amount of practice, the number of attempts needed for successful catheterization fell significantly (P<0.05), and the time required and the first-attempt and overall success rates improved. Conclusion: Considering the number of attempts and time required together with the first-attempt and overall success rates, about 36 supervised cases are needed to master ultrasound-guided DNTP radial artery catheterization.
Keywords: learning curve; anesthesiology; teaching; vascular puncture
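A CUSUM learning curve of the kind used in such studies can be sketched in a few lines. The convention below (a failure adds 1 − p0, a success subtracts p0, so the curve falls as the trainee becomes competent) and the outcome data are illustrative assumptions, not the study's records or its exact scoring parameters.

```python
# p0 is the acceptable first-attempt failure rate (assumed for illustration).
p0 = 0.2
outcomes = [0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1]  # 1 = first-attempt success (invented)

cusum, scores = 0.0, []
for success in outcomes:
    # Failures push the curve up by (1 - p0); successes pull it down by p0.
    cusum += (1 - p0) if success == 0 else -p0
    scores.append(round(cusum, 2))

print(scores)
```

A sustained downward slope after the early failures is the visual signature of acquired competence; published CUSUM variants additionally draw decision boundaries derived from chosen type I/II error rates before declaring mastery.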