Funding: Supported by the Science and Technology on Complex Electronic System Simulation Laboratory (Funding No. 6142401003022109).
Abstract: This study investigates the scheduling problem of multiple agile optical satellites with large-scale tasks. This problem is difficult to solve owing to the time-dependent characteristics of agile optical satellites, complex constraints, and a considerable solution space. To solve the problem, we propose a scheduling method based on an improved sine and cosine algorithm and a task merging approach. We first establish a scheduling model with task merging constraints and observation action constraints to describe the problem. Then, an improved sine and cosine algorithm is proposed to search for the optimal solution with the maximum profit ratio. An adaptive cosine factor and an adaptive greedy factor are adopted to improve the algorithm. Besides, a task merging method with a task reallocation mechanism is developed to improve scheduling efficiency. Experimental results demonstrate the superiority of the proposed algorithm over the comparison algorithms.
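The abstract does not specify the authors' adaptive cosine and greedy factors, but the baseline sine and cosine algorithm they improve upon follows a standard position-update rule. A minimal sketch of that baseline update, with the usual linearly decaying control parameter (all names and parameters here are illustrative, not the paper's):

```python
import math
import random

def sca_step(x, best, t, t_max, a=2.0):
    """One standard sine-cosine position update for a solution vector.

    x:     current solution (list of floats)
    best:  best solution found so far
    t:     current iteration; t_max: iteration budget
    a:     initial amplitude; r1 decays linearly so the search shifts
           from exploration to exploitation over time.
    """
    r1 = a - t * (a / t_max)  # linearly decreasing control parameter
    new_x = []
    for xi, bi in zip(x, best):
        r2 = random.uniform(0.0, 2.0 * math.pi)  # phase of sin/cos
        r3 = random.uniform(0.0, 2.0)            # weight on the best solution
        r4 = random.random()                     # chooses sine or cosine branch
        if r4 < 0.5:
            new_x.append(xi + r1 * math.sin(r2) * abs(r3 * bi - xi))
        else:
            new_x.append(xi + r1 * math.cos(r2) * abs(r3 * bi - xi))
    return new_x
```

The paper's improvement replaces the fixed decay of `r1` with adaptive factors; the sketch above shows only the unmodified update they start from.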
Abstract: In the traditional mode, a satellite observes a single task at a time; imaging accuracy is high under this mode, but the number of imaged tasks is small and resource utilization is extremely low. Therefore, a multi-task merging mechanism (MTMM) is designed on the basis of the single-task observation mode, merging tasks while guaranteeing users' minimum imaging requirements. First, a multi-satellite scheduling model is established based on the merged task set. Then, an improved ant colony optimization based on task merging (IACO-TM) algorithm is proposed for the model. An adaptive ant-window strategy, a forced perturbation mechanism, and a dynamic parameter-tuning strategy are designed in the algorithm to effectively prune the ants' search space, preventing the algorithm from falling into local optima while accelerating its convergence. Finally, extensive simulation experiments comparing against improved ant colony optimization without task merging (IACO) and traditional ant colony optimization based on task merging (TACO-TM) verify the effectiveness of the proposed MTMM and IACO-TM.
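The core of a task merging mechanism like MTMM is deciding when adjacent observation requests can be combined into one imaging action. A toy sketch of one greedy merging pass, assuming tasks are represented as time windows and a single hypothetical `max_span` constraint stands in for the users' minimum imaging requirements (the paper's actual merging conditions are richer and are not reproduced here):

```python
def merge_tasks(tasks, max_span):
    """Greedy single-pass merge of observation task windows.

    tasks:    list of (start, end) time windows, sorted on the fly;
              two windows merge only if they overlap AND the merged
              window does not exceed max_span (a stand-in for keeping
              the combined strip within acceptable imaging quality).
    returns:  list of merged (start, end) windows.
    """
    merged = []
    for start, end in sorted(tasks):
        if (merged
                and start <= merged[-1][1]                 # windows overlap
                and end - merged[-1][0] <= max_span):      # merged span acceptable
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

For example, `merge_tasks([(0, 5), (3, 8), (20, 25)], max_span=10)` collapses the first two overlapping windows into `(0, 8)` and leaves the distant third window alone.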
Funding: Supported by the National Natural Science Foundation of China (Nos. 62003115 and 11972130), the Shenzhen Science and Technology Program, China (JCYJ20220818102207015), and the Heilongjiang Touyan Team Program, China.
Abstract: The Low Earth Orbit (LEO) remote sensing satellite mega-constellation is large in scale and diverse in satellite type, which gives it unique advantages in executing concurrent multiple tasks. However, the large number of tasks and satellites increases the complexity of resource allocation. The primary problem in implementing concurrent multiple tasks via an LEO mega-constellation is therefore pre-processing the tasks and observation resources. To address this challenge, we propose a pre-processing algorithm for the mega-constellation based on highly Dynamic Spatio-Temporal Grids (DSTG). First, this paper describes the management model of the mega-constellation and the multiple tasks. Then, the DSTG coding method is proposed, on the basis of which the complex observation resources of the mega-constellation are described. Third, the DSTG algorithm processes concurrent multiple tasks at multiple levels, including task spatial attributes, temporal attributes, and grid task importance evaluation. Finally, simulation results on a constellation case verify the effectiveness of DSTG-based concurrent multi-task pre-processing, confirming the autonomous decomposition, fusion, and grid mapping of tasks as well as the convenient indexing of time windows.
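The benefit of a spatio-temporal grid coding like DSTG is that tasks and satellite access windows sharing a grid key can be matched by lookup rather than pairwise comparison. A toy sketch of such a coding, assuming uniform spatial cells and fixed time slots (the paper's DSTG coding is more elaborate; `cell_deg` and `slot_s` are illustrative parameters, not the authors'):

```python
def grid_key(lat, lon, t, cell_deg=1.0, slot_s=600):
    """Encode a (latitude, longitude, time) sample as a discrete grid key.

    Space is binned into cell_deg-degree cells and time into
    slot_s-second slots; tasks and observation opportunities that
    produce the same key fall in the same spatio-temporal grid and
    can be indexed together in O(1).
    """
    i = int((lat + 90.0) // cell_deg)    # latitude bin, shifted to be non-negative
    j = int((lon + 180.0) // cell_deg)   # longitude bin, shifted to be non-negative
    k = int(t // slot_s)                 # time-slot index
    return (i, j, k)
```

For instance, `grid_key(0.5, 0.5, 1200)` returns `(90, 180, 2)`; a dictionary keyed by these tuples then serves as the grid index for both tasks and access windows.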
Abstract: Data quality strongly influences the application of grain big data, so data cleaning is necessary and important work. In the MapReduce framework, parallel techniques are often used to execute data cleaning with high scalability, but for lack of effective design there is considerable computational redundancy in the cleaning process, which lowers performance. In this research, we found that during data cleaning some tasks are carried out multiple times on the same input files, or require the same operation results. To address this problem, we propose a new optimization technique based on task merging. By merging simple or redundant computations on the same input files, the number of loop computations in MapReduce can be greatly reduced. Experiments show that this approach significantly reduces overall system runtime, demonstrating that the data cleaning process is optimized. We optimized several data cleaning modules, including entity identification, inconsistent data restoration, and missing value filling. Experimental results show that the proposed method increases the efficiency of grain big data cleaning.
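The task-merging idea in this abstract amounts to collapsing cleaning jobs that would scan the same input file into one pass. A minimal sketch of that deduplication step, assuming jobs are simply (input file, operation) pairs (the names here are illustrative, not the paper's API):

```python
def merge_jobs(jobs):
    """Collapse redundant cleaning jobs before submitting them.

    jobs:    list of (input_file, operation) pairs; exact duplicates are
             dropped, and jobs sharing an input file are grouped so the
             file is scanned once with all requested operations applied
             in a single pass.
    returns: sorted list of (input_file, [operations]) groups.
    """
    by_file = {}
    for input_file, op in jobs:
        by_file.setdefault(input_file, set()).add(op)  # set drops duplicates
    return [(f, sorted(ops)) for f, ops in sorted(by_file.items())]
```

For example, two duplicate `"dedup"` jobs plus a `"fill"` job on `a.csv` become one group `("a.csv", ["dedup", "fill"])`, so `a.csv` is read once instead of three times.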