Based on the cyberspace characteristics of networked control systems (NCSs), this paper studies the spatio-temporal characteristics of NCS information security and the NCS security domain. On this basis, a novel architecture for the NCS security domain, featuring autonomous controllability and autonomous coordinatability, is proposed, and several key issues of the newly constructed NCS security-domain architecture are discussed.
Funding: the National Key Research and Development Program of China under Grant No. 2018YFB1003502, and the National Natural Science Foundation of China under Grant Nos. 61825202, 61832006, 61628204, and 61702201.
Abstract: Graphs are a well-known data structure for representing relationships in a variety of applications, e.g., data science and machine learning. Despite a wealth of existing efforts on developing graph processing systems that improve performance and/or energy efficiency on traditional architectures, dedicated hardware solutions, also referred to as graph processing accelerators, are essential and emerging to provide benefits significantly beyond what pure software solutions can offer. In this paper, we conduct a systematic survey of the design and implementation of graph processing accelerators. Specifically, we review the relevant techniques in the three core components of a graph processing accelerator: preprocessing, parallel graph computation, and runtime scheduling. We also examine the benchmarks and results that existing studies use to evaluate a graph processing accelerator. Interestingly, we find that there is no absolute winner in any of the three aspects of graph acceleration, due to the diverse characteristics of graph processing and the complexity of hardware configurations. We finally present and discuss several challenges in detail, and further explore opportunities for future research.
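As a rough illustration of what the first two components named in the abstract mean in software terms (a sketch of our own, not taken from the survey): "preprocessing" commonly converts an edge list into a compact layout such as compressed sparse row (CSR), and "parallel graph computation" then iterates over that layout, e.g., one PageRank step.

```python
# Illustrative sketch (not from the survey): CSR preprocessing followed by
# one synchronous PageRank iteration over the preprocessed graph.

def build_csr(num_vertices, edges):
    """Preprocess a (src, dst) edge list into CSR: row offsets + column indices."""
    offsets = [0] * (num_vertices + 1)
    for src, _ in edges:
        offsets[src + 1] += 1          # out-degree histogram
    for v in range(num_vertices):
        offsets[v + 1] += offsets[v]   # prefix sum -> row offsets
    cols = [0] * len(edges)
    cursor = offsets[:-1].copy()       # next free slot per row
    for src, dst in edges:
        cols[cursor[src]] = dst
        cursor[src] += 1
    return offsets, cols

def pagerank_step(offsets, cols, rank, damping=0.85):
    """One push-style PageRank iteration: each vertex scatters its share."""
    n = len(rank)
    out = [(1.0 - damping) / n] * n
    for src in range(n):
        degree = offsets[src + 1] - offsets[src]
        if degree == 0:
            continue
        share = damping * rank[src] / degree
        for e in range(offsets[src], offsets[src + 1]):
            out[cols[e]] += share
    return out
```

An accelerator would map the inner scatter loop onto parallel processing elements; the CSR layout matters because it turns neighbor traversal into sequential memory streams.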
Funding: the National Science Foundation (NSF) (1822085, 1725456, 1816833, 1500848, 1719160, and 1725447); the NSF Computing and Communication Foundations (1740352); the Nanoelectronics COmputing REsearch Program in the Semiconductor Research Corporation (NC-2766-A); and the Center for Research in Intelligent Storage and Processing-in-Memory, one of six centers in the Joint University Microelectronics Program, an SRC program sponsored by the Defense Advanced Research Projects Agency.
Abstract: Recently, due to the availability of big data and the rapid growth of computing power, artificial intelligence (AI) has regained tremendous attention and investment. Machine learning (ML) approaches have been successfully applied to solve many problems in academia and in industry. Although the explosion of big data applications is driving the development of ML, it also imposes severe challenges of data processing speed and scalability on conventional computer systems. Computing platforms designed specifically for AI applications have been considered, ranging from a complement to von Neumann platforms to a "must-have" and stand-alone technical solution. These platforms, which belong to a larger category named "domain-specific computing," focus on specific customization for AI. In this article, we summarize the recent advances in accelerator designs for deep neural networks (DNNs), that is, DNN accelerators. We discuss various architectures that support DNN execution in terms of computing units, dataflow optimization, targeted network topologies, architectures on emerging technologies, and accelerators for emerging applications. We also provide our vision of future trends in AI chip design.
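To make the abstract's "computing units" and "dataflow optimization" concrete (a hypothetical sketch of our own, not a design from the article): many DNN accelerators arrange multiply-accumulate (MAC) units in a grid and fix a dataflow that decides which operand stays in place. Below is an output-stationary dataflow, where each output accumulator stays resident while activations and weights stream past it.

```python
# Hypothetical sketch (not from the article): matrix multiply organized the
# way an output-stationary MAC grid would compute it. Each output cell
# (i, j) is a stationary accumulator; one product arrives per streamed
# step k.

def matmul_output_stationary(A, B):
    """C = A @ B, accumulated one streaming step k at a time."""
    n, k_dim, m = len(A), len(A[0]), len(B[0])
    C = [[0.0] * m for _ in range(n)]  # accumulators stay "stationary"
    for k in range(k_dim):             # outer loop = one streaming step
        for i in range(n):             # these two loops run in parallel
            for j in range(m):         # across the MAC grid in hardware
                C[i][j] += A[i][k] * B[k][j]
    return C
```

The choice of which loop stays innermost is exactly the dataflow decision the abstract refers to: weight-stationary or row-stationary variants reorder the same three loops to reuse a different operand and cut data movement.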