Abstract: As the complexity of deep learning (DL) networks and training data grows enormously, methods that scale with computation are becoming the future of artificial intelligence (AI) development. In this regard, the interplay between machine learning (ML) and high-performance computing (HPC) is an innovative paradigm for improving the efficiency of AI research and development. However, building and operating an HPC/AI converged system requires broad knowledge to leverage the latest computing, networking, and storage technologies. Moreover, an HPC-based AI computing environment needs an appropriate resource allocation and monitoring strategy to utilize the system resources efficiently. We therefore introduce a technique for building and operating a high-performance AI computing environment with the latest technologies. Specifically, an HPC/AI converged system, called the GIST AI-X computing cluster, is configured inside the Gwangju Institute of Science and Technology (GIST); it is built by leveraging the latest Nvidia DGX servers, high-performance storage and networking devices, and various open-source tools. It can therefore serve as a good reference for building a small or middle-sized HPC/AI converged system for research and educational institutes. In addition, we propose a resource allocation method for DL jobs that efficiently utilizes the computing resources with multi-agent deep reinforcement learning (mDRL). Through extensive simulations and experiments, we validate that the proposed mDRL algorithm helps the HPC/AI converged cluster improve both system utilization and power consumption. By deploying the proposed resource allocation method on the system, total job completion time is reduced by around 20% and inefficient power consumption is reduced by around 40%.
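The abstract does not detail the mDRL algorithm itself, but the general idea can be sketched as independent per-node policy-gradient agents that decide how many GPUs to grant a queued DL job, rewarded for utilization and penalized for power draw. The following is a minimal illustrative sketch of that pattern, not the authors' implementation; the class `NodeAgent`, the state layout, and the reward coefficients are all assumptions introduced here for illustration.

```python
# Minimal multi-agent policy-gradient sketch (NOT the paper's code; all
# names, state shapes, and reward weights are illustrative assumptions).
# Each node runs an independent REINFORCE agent that picks how many of
# its GPUs to grant to the job at the head of the queue.
import torch
import torch.nn as nn

class NodeAgent(nn.Module):
    """Policy network for one compute node: state -> GPU-grant distribution."""
    def __init__(self, state_dim=4, max_gpus=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, max_gpus + 1),  # action = grant 0..max_gpus GPUs
        )

    def act(self, state):
        dist = torch.distributions.Categorical(logits=self.net(state))
        action = dist.sample()
        return action.item(), dist.log_prob(action)

def reward(utilization, power_kw, alpha=1.0, beta=0.1):
    """Assumed reward shape: favor utilization, penalize power consumption."""
    return alpha * utilization - beta * power_kw

agents = [NodeAgent() for _ in range(4)]  # e.g., one agent per DGX node
opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in agents]

# One toy training step per agent with vanilla REINFORCE.
for agent, opt in zip(agents, opts):
    state = torch.rand(4)  # toy state: free GPUs, queue length, etc.
    action, logp = agent.act(state)
    r = reward(utilization=action / 8.0, power_kw=0.3 * action)  # simulated feedback
    loss = -logp * r  # policy-gradient loss
    opt.zero_grad(); loss.backward(); opt.step()
```

In a real deployment each agent's reward would come from cluster telemetry (GPU utilization and node power meters) rather than the simulated values above.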
Abstract: A traditional HPC (High-Performance Computing) cluster is built on top of physical machines. It is usually not practical to reassign these machines to other tasks because software installation is time-consuming; as a result, they are usually dedicated to cluster usage. Virtualization technology provides an abstraction layer that allows several different operating systems (with different software packages) to run on top of one physical machine, and cloud computing provides an easy way for the user to manage and interact with the computing resources (in this case, the virtual machines). In this work, we demonstrate the feasibility of building a cloud-based HPC cluster on top of a set of desktop computers interconnected by Fast Ethernet. Our cluster has several advantages. For instance, deployment is fast: only 5 min are needed to deploy a cluster of 30 machines. In addition, several performance benchmarks have been carried out; as expected, the embarrassingly parallel problem shows a linear relationship between performance and cluster size.
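The abstract does not specify which benchmark codes were run, but the linear-scaling result for embarrassingly parallel work is easy to illustrate: each worker computes independently and communicates only once, in a final reduction. Below is a minimal sketch of such a benchmark, a Monte Carlo estimate of pi using mpi4py; it is an illustrative example of the workload class, not the paper's benchmark.

```python
# Embarrassingly parallel benchmark sketch: Monte Carlo estimation of pi
# with mpi4py. Illustrative only; the paper does not publish its benchmark
# code. Performance scales roughly linearly with cluster size because
# workers never communicate except for one constant-cost reduction.
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

SAMPLES_PER_RANK = 1_000_000
random.seed(rank)  # independent random stream per worker

# Each rank processes its own samples with zero inter-node traffic.
hits = sum(1 for _ in range(SAMPLES_PER_RANK)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)

# Single collective at the end; communication cost does not grow with work.
total = comm.reduce(hits, op=MPI.SUM, root=0)
if rank == 0:
    print("pi ~", 4.0 * total / (SAMPLES_PER_RANK * size))
```

Launched across the virtual machines with, e.g., `mpirun -np 30 python pi.py`, doubling the number of VMs roughly doubles the sampling throughput, which is the linear relationship the abstract reports.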