Abstract: Medium-scale wind power is usually connected directly to the regional power grid through systems at voltage levels of 35 kV and above. Because its points of connection are non-uniform and its output is uncertain, the regional grid can face local accommodation difficulties and local load overloading at the same time, creating congestion risks and even an accommodation dilemma. Drawing on the topology-reconfiguration capability of the high-voltage distribution network, a chance-constrained load-transfer model is proposed that takes the feasible topology states of 110 kV substation unit groups as its control objects. The probability density function of the wind power-load error is used to build a multi-state model of the random variables, and a balanced distribution of source and load power is taken as the objective, yielding an operation optimization technique for the high-voltage grid that uses topology reconfiguration under uncertainty. Tests on a practical case show that the proposed method effectively relieves the accommodation conflicts that arise in the regional high-voltage grid after the non-uniform integration of high-penetration wind power, and helps improve the utilization of high-voltage grid assets so as to mitigate congestion risk.
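The core of such a formulation is a chance constraint that bounds the probability of branch overload under the wind-load error distribution, optimized over the discrete set of feasible topology states. The following is only a minimal generic sketch of that structure; the symbols (x for the topology state, ξ for the wind-load error, P_l for branch flow, ε for the tolerated violation probability) are illustrative and are not the paper's notation.

```latex
% Generic chance-constrained transfer formulation (illustrative notation, not the paper's)
\begin{aligned}
\min_{x \in \mathcal{X}}\;
  & \sum_{l \in \mathcal{L}} \left( \frac{P_l(x,\xi)}{P_l^{\max}} - \rho(x,\xi) \right)^{2}
  && \text{drive all branch loading ratios toward their mean} \\
\text{s.t.}\;
  & \Pr\!\left[\, \lvert P_l(x,\xi) \rvert \le P_l^{\max} \right] \ge 1 - \varepsilon,
  \quad \forall l \in \mathcal{L}
  && \text{chance constraint on branch loading} \\
  & \xi \in \{\xi_1, \dots, \xi_K\}\ \text{with probabilities taken from the error PDF}
  && \text{multi-state model of the wind--load error}
\end{aligned}
```

Here x ranges over the feasible topology states of the 110 kV substation unit groups, ρ(x,ξ) is the mean loading ratio over all branches l ∈ L, and ε is the tolerated probability of violating a branch limit.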
Funding: Projects (61003075, 61170261) supported by the National Natural Science Foundation of China.
Abstract: Most transactional memory (TM) research has focused on multi-core processors, while other work has targeted clusters, leaving non-uniform memory access (NUMA) systems largely unexplored. Existing TM implementations suffer significant performance degradation on NUMA systems because they ignore the slower remote memory accesses. To solve this problem, a latency-based conflict detection method and a forecasting-based conflict prevention method were proposed, and a NUMA-aware TM system was built on these techniques. By reducing remote memory accesses and the transaction abort rate, the NUMA-aware strategies achieve good practical TM performance on NUMA systems, as the experimental results show.
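The performance gap the abstract builds on, namely that remote memory is slower than local memory, can be observed directly with libnuma. The sketch below is background illustration only, not the paper's conflict-detection code; it assumes a Linux machine with libnuma installed (link with -lnuma) and at least two NUMA nodes.

```c
/* Measure local vs. remote NUMA access latency (illustrative, not the paper's code).
 * Build: gcc numa_latency.c -O2 -lnuma -o numa_latency  (requires libnuma, >= 2 nodes) */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BYTES (64UL * 1024 * 1024)

/* Walk the buffer with a cache-line stride and return the elapsed nanoseconds. */
static long long touch(volatile char *buf) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < BYTES; i += 64)
        (void)buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1000000000LL + (t1.tv_nsec - t0.tv_nsec);
}

int main(void) {
    if (numa_available() < 0 || numa_max_node() < 1) {
        fprintf(stderr, "need a NUMA system with at least two nodes\n");
        return 1;
    }
    numa_run_on_node(0);                          /* pin this thread to node 0 */
    char *local  = numa_alloc_onnode(BYTES, 0);   /* memory on the same node   */
    char *remote = numa_alloc_onnode(BYTES, 1);   /* memory on a remote node   */
    if (!local || !remote) { fprintf(stderr, "allocation failed\n"); return 1; }
    printf("local : %lld ns\n", touch(local));
    printf("remote: %lld ns\n", touch(remote));
    numa_free(local, BYTES);
    numa_free(remote, BYTES);
    return 0;
}
```

The difference between the two timings is the latency asymmetry that a latency-aware conflict-handling scheme has to take into account.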
Funding: the National Key Research and Development Program of China (No. 2017YFC0212100) and the National High-tech R&D Program of China (No. 2015AA015308).
Abstract: With the rapid development of big data and artificial intelligence (AI), cloud platform architectures are continuously being developed, optimized, and improved. New applications, such as deep computing and high-performance computing, require enhanced computing power. To meet this requirement, a non-uniform memory access (NUMA) configuration method is proposed for the cloud computing system according to the affinity, adaptability, and availability of the NUMA architecture processor platform. The proposed method is verified in a test environment based on a domestic central processing unit (CPU).
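At the process level, affinity-aware placement of this kind boils down to keeping a worker's CPU and its memory on the same NUMA node. The C sketch below is a generic illustration of that binding with libnuma (link with -lnuma); it is not the configuration method proposed in the paper.

```c
/* Bind the calling worker's CPU and memory to one NUMA node (generic affinity
 * illustration, not the paper's configuration method).  Link with -lnuma. */
#include <numa.h>
#include <stdio.h>

static int bind_worker_to_node(int node) {
    if (numa_available() < 0 || node > numa_max_node())
        return -1;
    numa_run_on_node(node);        /* schedule this thread only on CPUs of `node` */
    numa_set_preferred(node);      /* allocate new pages from `node` when possible */
    return 0;
}

int main(void) {
    if (bind_worker_to_node(0) != 0) {
        fprintf(stderr, "NUMA not available\n");
        return 1;
    }
    printf("worker bound to NUMA node 0\n");
    /* ... worker now runs with local CPU and memory affinity ... */
    return 0;
}
```

The same placement can also be requested from the command line with numactl, e.g. numactl --cpunodebind=0 --membind=0 <app>.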
Funding: the National Key Research and Development Program of China under Grant No. 2016YFB1000500, the National Natural Science Foundation of China under Grant No. 61572314, and the National Youth Top-Notch Talent Support Program of China.
Abstract: The multicore evolution has stimulated renewed interest in scaling up applications on shared-memory multiprocessors, significantly improving the scalability of many applications. But that scalability is limited to a single node; programmers still have to redesign applications to scale out over multiple nodes. This paper revisits the design and implementation of distributed shared memory (DSM) as a way to scale out applications optimized for the non-uniform memory access (NUMA) architecture over a well-connected cluster. It presents MAGI, an efficient DSM system that provides a transparent shared address space with scalable performance on a cluster with fast network interfaces. MAGI is unique in that it presents a NUMA abstraction to fully harness the multicore resources in each node through hierarchical synchronization and memory management. MAGI also exploits the memory access patterns of big-data applications and leverages a set of optimizations for remote direct memory access (RDMA) to reduce the number of page faults and the cost of the coherence protocol. MAGI has been implemented as a user-space library with pthread-compatible interfaces and can run existing multithreaded applications with minimal modifications. We deployed MAGI over an 8-node RDMA-enabled cluster. Experimental evaluation shows that MAGI achieves up to a 9.25x speedup compared with an unoptimized implementation, delivering scalable performance for large-scale data-intensive applications.
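User-space DSM systems of this kind rely on trapping accesses to pages that are not yet present locally and fetching them from a remote node. The sketch below shows the classic mprotect/SIGSEGV mechanism for trapping such an access; it is a didactic illustration of the page-fault path only, not MAGI's RDMA-based implementation.

```c
/* Classic page-protection trick behind user-space DSM: protect a region, catch the
 * fault, "fetch" the page, then unprotect it.  Didactic sketch only; MAGI's actual
 * RDMA-based coherence protocol is more involved. */
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static char *region;
static size_t page_sz;

/* Fault handler: stands in for fetching the page from a remote node. */
static void on_fault(int sig, siginfo_t *info, void *ctx) {
    (void)sig; (void)ctx;
    char *page = (char *)((uintptr_t)info->si_addr & ~(uintptr_t)(page_sz - 1));
    mprotect(page, page_sz, PROT_READ | PROT_WRITE);  /* make the page accessible    */
    memset(page, 'A', page_sz);                       /* pretend we copied remote data */
}

int main(void) {
    page_sz = (size_t)sysconf(_SC_PAGESIZE);
    region = mmap(NULL, 4 * page_sz, PROT_NONE,       /* "shared" region, initially absent */
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) return 1;

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    printf("first byte after fault-in: %c\n", region[0]);  /* triggers the handler */
    return 0;
}
```

Modern user-space DSMs often replace this signal path with userfaultfd or one-sided RDMA reads to cut the per-fault cost, which is the kind of overhead the abstract's RDMA optimizations target.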
Funding: supported by the National Key Research and Development Program of China (No. 2016YFB0201300).
Abstract: The Distributed Shared Memory (DSM) architecture is widely used in today's computer designs to mitigate the ever-widening processor-memory gap, and it inevitably exposes Non-Uniform Memory Access (NUMA) to shared-memory parallel applications. Failure to adapt to the NUMA effect can significantly degrade application performance, especially on today's manycore platforms with tens to hundreds of cores. However, traditional approaches such as first-touch and memory policies fall short in false page-sharing, fragmentation, or ease of use. In this paper, we propose a partitioned shared-memory approach that allows multithreaded applications to achieve full NUMA-awareness with only minor code changes, and we develop an accompanying NUMA-aware heap manager that eliminates false page-sharing and minimizes fragmentation. Experiments on a 256-core cc-NUMA computing node show that the proposed approach helps applications adapt to NUMA with only minor code changes and improves the performance of typical multithreaded scientific applications by up to 4.3 times as more cores are used.
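The NUMA adaptation described here ultimately comes down to giving each thread memory on its own node instead of letting threads on different nodes share pages. A minimal way to express that idea with libnuma is a per-thread, node-local allocation, sketched below; this illustrates the principle only and is not the paper's heap manager, which additionally handles false page-sharing and fragmentation.

```c
/* Per-thread node-local allocation: each thread runs on one NUMA node and gets
 * pages on that node, so no page is tugged at from two nodes.  Illustration of
 * the principle only, not the paper's heap manager.  Link with -lnuma -lpthread. */
#include <numa.h>
#include <pthread.h>
#include <stdio.h>

#define CHUNK (1UL << 20)   /* 1 MiB per-thread working buffer */

static void *worker(void *arg) {
    int node = *(int *)arg;
    numa_run_on_node(node);                        /* keep the thread on its node  */
    double *buf = numa_alloc_onnode(CHUNK, node);  /* give it node-local pages     */
    if (!buf) return NULL;
    for (size_t i = 0; i < CHUNK / sizeof(double); i++)
        buf[i] = (double)i;                        /* all accesses stay local      */
    numa_free(buf, CHUNK);
    return NULL;
}

int main(void) {
    if (numa_available() < 0) { fprintf(stderr, "no NUMA support\n"); return 1; }
    int nodes[2] = {0, numa_max_node() > 0 ? 1 : 0};
    pthread_t t[2];
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, &nodes[i]);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    return 0;
}
```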
Funding: Supported by the National High Technology Development 863 Program of China under Grant No. 2008AA010901, the National Natural Science Foundation of China under Grant Nos. 60736012 and 60673146, and the National Basic Research 973 Program of China under Grant No. 2005CB321601.
Abstract: Godson-3 is the latest generation of the Godson microprocessor family. It adopts a scalable multi-core architecture with hardware support for accelerating applications such as X86 emulation and signal processing. This paper introduces the system architecture of Godson-3 from various aspects, including system scalability, the organization of the memory hierarchy, the network-on-chip, the inter-chip connection, and the I/O subsystem.