Kubernetes is an open-source container management tool that automates container deployment, load balancing, and (de)scaling, including the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA). HPA enables seamless operation, interactively scaling the number of resource units, or pods, without downtime. Default resource metrics, such as the CPU and memory use of host machines and pods, are monitored by Kubernetes. Cloud computing has emerged as a platform for individuals as well as the corporate sector; it provides cost-effective infrastructure, platform, and software services in a shared environment. On the other hand, the emergence of Industry 4.0 has brought new challenges for the adaptability and infusion of cloud computing. As the global work environment adopts constituents of Industry 4.0 such as robotics, artificial intelligence, and IoT devices, it is becoming evident that one emerging challenge is collaborative schematics: the provision of an autonomous mechanism that can develop, manage, and operationalize digital resources such as CoBots to perform tasks in a distributed, collaborative cloud environment, optimizing resource utilization and ensuring schedule completion. Collaborative schematics are also linked with the management of the big data produced by large-scale Industry 4.0 setups. Different use cases and simulation results showed a significant improvement in pod CPU utilization, latency, and throughput over the standard Kubernetes environment.
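For readers unfamiliar with how the default HPA decides when to add or remove pods, the sketch below approximates its documented CPU-based control loop in Python; the function name, the tolerance value, and the sample numbers are illustrative and are not taken from the abstract above.

```python
import math

def desired_replicas(current_replicas: int,
                     current_cpu_utilization: float,
                     target_cpu_utilization: float,
                     tolerance: float = 0.1) -> int:
    """Approximate the stock HPA control loop:
    desired = ceil(current * currentMetric / targetMetric),
    taking no action while the ratio stays inside the tolerance band."""
    ratio = current_cpu_utilization / target_cpu_utilization
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # close enough to target: do nothing
    return max(1, math.ceil(current_replicas * ratio))

# Example: 4 pods at 90% CPU against a 60% target scale out to 6 pods.
print(desired_replicas(4, 90.0, 60.0))  # -> 6
```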
Background: The control system of the China ADS front-end demo linac (CAFe) facility has been built on the Experimental Physics and Industrial Control System (EPICS) software framework. In the EPICS system, online acquisition, storage, and historical data queries are handled by the Archiver Appliance. For the construction, upgrading, operation, and maintenance of the CAFe facility, a convenient and reliable archiver system is necessary. To turn the CAFe accelerator from a research facility into an experimental facility, some functions of the control system have been reworked to improve the reliability of facility operation. Purpose: This paper presents a new deployment of the Archiver Appliance. The archiver engine is deployed in a Kubernetes (K8S) cluster as a container, and a horizontal pod autoscaling (HPA) strategy was formulated to implement automatic expansion and contraction of the application. Methods: Docker containers of the Archiver Appliance have been built and run in the pods of K8S clusters, enabling automatic expansion and contraction of the application. Prometheus, Dashboard, and Grafana are combined to build a monitoring system for the cluster and the pods inside it. Results: The Archiver Appliance system with K8S clusters is currently under commissioning at the CAFe facility. This deployment method fulfills the requirements of the CAFe experimental facility for portability, high availability, and disaster tolerance of the archive server software.
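A minimal sketch of how an HPA object can be attached to a containerized service such as an archiver deployment, using the official Kubernetes Python client; the deployment name, namespace, and thresholds are assumptions for illustration and do not come from the CAFe setup described above.

```python
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
autoscaling = client.AutoscalingV1Api()

# Hypothetical Deployment "archiver-appliance" in namespace "epics".
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="archiver-appliance-hpa", namespace="epics"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="archiver-appliance"),
        min_replicas=1,
        max_replicas=4,
        target_cpu_utilization_percentage=70,  # illustrative threshold
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="epics", body=hpa)
```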
Container-based virtualization techniques are becoming an alternative to traditional virtual machines due to lower overhead and better scaling. As one of the most widely used open-source container orchestration systems, Kubernetes provides a built-in mechanism, the horizontal pod autoscaler (HPA), for dynamic resource provisioning. By default, scaling pods based only on CPU utilization, a single performance metric, HPA may create more pods than actually needed. Through extensive measurements of a containerized n-tier application benchmark, RUBBoS, we find that excessive pods consume more CPU and memory and even degrade application response times due to interference. Furthermore, a Kubernetes service does not balance incoming requests between old pods and new pods created by HPA, due to stateful HTTP. In this paper, we propose a bi-metric approach to scaling pods that takes into account both CPU utilization and the utilization of a thread pool, an important soft resource in Httpd and Tomcat. Our approach collects the CPU and memory utilization of pods. Meanwhile, it uses ELBA, a milli-bottleneck detector, to calculate the queue lengths of Httpd and Tomcat pods and then evaluate the utilization of their thread pools. Based on the utilization of both CPU and thread pools, our approach scales up fewer replicas of Httpd and Tomcat pods, contributing to a reduction in hardware resource utilization. At the same time, our approach leverages the preStop hook along with liveness and readiness probes to relieve load imbalance between old Tomcat pods and new ones. Based on the containerized RUBBoS benchmark, our experimental results show that the proposed approach not only reduces CPU and memory usage by as much as 14% and 24%, respectively, compared with HPA, but also relieves the load imbalance, reducing the average response time of requests by as much as 80%. Our approach also demonstrates that it is better to scale pods by multiple metrics rather than a single one.
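As one way to picture the bi-metric idea, the sketch below combines a hardware metric (CPU) with a soft-resource metric (thread-pool occupancy) and scales out only when both are saturated; this is a simplified stand-in, not the authors' ELBA-based implementation, and all thresholds are hypothetical.

```python
import math

def bimetric_replicas(current_replicas: int,
                      cpu_util: float, cpu_target: float,
                      pool_util: float, pool_target: float) -> int:
    """Scale out only when both the hardware metric (CPU) and the soft
    resource (thread-pool occupancy) exceed their targets; otherwise keep
    the current replica count. This is one simple combination rule, not
    necessarily the rule used in the paper."""
    cpu_ratio = cpu_util / cpu_target
    pool_ratio = pool_util / pool_target
    if cpu_ratio > 1.0 and pool_ratio > 1.0:
        # Grow by the smaller of the two pressures to avoid over-provisioning.
        return math.ceil(current_replicas * min(cpu_ratio, pool_ratio))
    return current_replicas

# High CPU but an idle thread pool does not trigger scaling:
print(bimetric_replicas(3, 85.0, 60.0, 30.0, 60.0))  # -> 3
# Both metrics saturated -> scale out conservatively:
print(bimetric_replicas(3, 85.0, 60.0, 75.0, 60.0))  # -> 4
```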
Because an edge cloud does not have the computing capacity of a central cloud, it can easily suffer from meaningless scaling jitter or insufficient resource-processing capacity under dynamic loads. Therefore, microservice applications were experimentally evaluated in a real edge-cloud environment with two synthetic and two real workloads, and a hybrid autoscaling method based on load prediction (predictively horizontal and vertical pod autoscaling, Pre-HVPA) is proposed. The method first uses machine learning to predict the characteristics of the load data and obtain the final load forecast. The predicted load is then used to drive combined horizontal and vertical autoscaling. Simulation results show that autoscaling based on this method reduces scaling jitter and the number of containers used, making it well suited to microservice applications in edge-cloud environments.
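To make the predictive-scaling idea concrete, the following sketch forecasts the next load sample with a simple regression and translates it into a horizontal target (replica count) and a vertical target (per-pod CPU request); the model, the requests-per-pod capacity, and the CPU figures are placeholders, since the abstract does not specify them.

```python
import math
import numpy as np

def predict_next_load(history, window=5):
    """Fit a straight line through the last `window` load samples and
    extrapolate one step ahead. Pre-HVPA uses machine-learning load
    prediction; the exact model is not stated, so this linear fit is
    only a stand-in."""
    y = np.asarray(history[-window:], dtype=float)
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)
    return slope * len(y) + intercept

def plan_scaling(predicted_rps, rps_per_pod, base_cpu_millicores):
    """Turn the predicted load into a replica count (horizontal) and a
    per-pod CPU request (vertical); capacities here are illustrative."""
    replicas = max(1, math.ceil(predicted_rps / rps_per_pod))
    utilization = predicted_rps / (replicas * rps_per_pod)
    cpu_request = int(base_cpu_millicores * min(2.0, max(0.5, utilization)))
    return replicas, cpu_request

load_history = [120, 135, 150, 170, 190, 210]   # requests per second
predicted = predict_next_load(load_history)
print(plan_scaling(predicted, rps_per_pod=100, base_cpu_millicores=500))  # -> (3, 380)
```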
Funding: The research has been supported by a grant from NSFC (Grant No. 61702063), the Fundamental Science and Frontier Technology Research Projects of Chongqing (cstc2017jcyjAX0089), and the China Scholarship Council (201708505099).