This paper presents a novel hierarchy cache architecture for optimizing I/O performance. The main idea of the hierarchy cache is to use a few megabytes of RAM and a pagefile to form a two-level cache architecture; the pagefile is equivalent to the cache disk in DCD (Disk Caching Disk). The pagefile outperforms the data disks because data are accessed in different units and in different ways: small writes are collected in the RAM cache first, and the data are later transferred to the pagefile in large writes. When the system is idle, the data are destaged from the pagefile to the data disks. The performance test results show that the hierarchy cache improves I/O performance dramatically for small writes, and a mail server using the hierarchy cache driver can handle transactions about 2.2 times faster than a normal mail server. The hierarchy cache is implemented as a filter driver, so it is transparent to the current Windows 2000/Windows XP operating systems.
Key words: hierarchy cache; pagefile; small write; disk caching disk; filter driver
CLC number: TP 311
Foundation item: Supported by the National Natural Science Foundation of China (60273073)
Biography: XIE Chang-sheng (1945-), male, Professor; research directions: storage systems, network storage.
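The two-level write path described in the abstract above can be illustrated with a short sketch. This is a minimal user-space model written for illustration, not the authors' Windows filter driver; the class, the RAM-cache size, and the in-memory stand-ins for the pagefile and data disk are all assumptions. Small writes are collected in the RAM cache, flushed to the pagefile in one large write, and destaged to the data disk when the system is idle.

import io

class HierarchyCache:
    def __init__(self, ram_limit_bytes=4 * 1024 * 1024):
        self.ram_cache = {}              # offset -> bytes; collects small writes
        self.ram_bytes = 0
        self.ram_limit = ram_limit_bytes
        self.pagefile = io.BytesIO()     # stand-in for the cache disk (pagefile)
        self.pagefile_index = {}         # offset -> (position in pagefile, length)
        self.data_disk = io.BytesIO()    # stand-in for the data disk

    def write(self, offset, data):
        # Collect a small write in the RAM cache.
        self.ram_bytes += len(data) - len(self.ram_cache.get(offset, b""))
        self.ram_cache[offset] = data
        if self.ram_bytes >= self.ram_limit:
            self.flush_to_pagefile()

    def flush_to_pagefile(self):
        # Transfer the buffered small writes to the pagefile as one large write.
        segment = bytearray()
        base = self.pagefile.seek(0, io.SEEK_END)
        for offset, data in self.ram_cache.items():
            self.pagefile_index[offset] = (base + len(segment), len(data))
            segment += data
        self.pagefile.write(bytes(segment))
        self.ram_cache.clear()
        self.ram_bytes = 0

    def destage_when_idle(self):
        # When the system is idle, move data from the pagefile to the data disk.
        self.flush_to_pagefile()
        for offset, (pos, length) in self.pagefile_index.items():
            self.pagefile.seek(pos)
            block = self.pagefile.read(length)
            self.data_disk.seek(offset)
            self.data_disk.write(block)
        self.pagefile_index.clear()
        self.pagefile.seek(0)
        self.pagefile.truncate()

# Hypothetical usage: sixteen 8-byte small writes, then a destage while "idle".
cache = HierarchyCache(ram_limit_bytes=64)
for i in range(16):
    cache.write(i * 8, bytes([i]) * 8)
cache.destage_when_idle()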
Docker, as a mainstream container solution, adopts the Copy-on-Write (CoW) mechanism in its storage drivers. This mechanism satisfies the need of different containers to share the same image. However, when a single container performs operations such as modifying an image file, a duplicate is created in the upper read-write layer, which contributes to the runtime overhead. When the accessed image file is fairly large, this additional overhead becomes non-negligible. Here we present the concept of Dynamic Prefetching Strategy Optimization (DPSO), which optimizes the CoW mechanism for a Docker container on the basis of a dynamic prefetching strategy. At the beginning of the container life cycle, DPSO pre-copies up the image files that are most likely to be copied up later, eliminating the overhead of performing this operation during application runtime. The experimental results show that DPSO achieves an average prefetch accuracy of greater than 78% in complex scenarios and can effectively eliminate the overhead caused by the CoW mechanism.
Foundation items: Supported by the National Key Research and Development Program of China (No. 2018YFB1003203), the National Natural Science Foundation of China (Nos. 61772218 and 61433019), the Outstanding Youth Foundation of Hubei Province (No. 2016CFA032), and the Chinese Universities Scientific Fund (No. 2019kfyRCPY030).
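The pre-copy-up idea behind DPSO can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; the layer paths, the frequency-based scoring in rank_candidates, and the use of shutil.copy2 as a stand-in for an overlay copy-up are all assumptions. At container start-up, image files predicted to be modified are copied from the read-only lower layer into the upper read-write layer, so later writes do not trigger a copy-up at application runtime.

import os
import shutil

def rank_candidates(access_history, top_k=10):
    # Hypothetical scoring: rank image files by how often earlier container
    # runs ended up copying them up (a stand-in for DPSO's dynamic strategy).
    scored = sorted(access_history.items(), key=lambda kv: kv[1], reverse=True)
    return [path for path, _count in scored[:top_k]]

def pre_copy_up(lower_dir, upper_dir, predicted_files):
    # Copy the predicted files from the read-only lower layer into the upper
    # read-write layer before the application starts writing to them.
    for rel_path in predicted_files:
        src = os.path.join(lower_dir, rel_path)
        dst = os.path.join(upper_dir, rel_path)
        if not os.path.exists(src) or os.path.exists(dst):
            continue                      # nothing to copy, or already copied up
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)            # preserve metadata, like an overlay copy-up

# Hypothetical usage at the beginning of the container life cycle; the paths
# and access history below are made up for illustration.
history = {"var/lib/app/data.db": 42, "etc/app/app.conf": 17, "usr/share/doc/readme": 1}
candidates = rank_candidates(history, top_k=2)
pre_copy_up("/var/lib/docker/overlay2/<image-layer>/diff",
            "/var/lib/docker/overlay2/<container-layer>/diff",
            candidates)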