Abstract
This paper reviews MapReduce and Spark, two big-data computing algorithms and architectures, analyzing and comparing them in terms of background, principles, and application scenarios, and summarizing the advantages and corresponding limitations of each. For non-iterative workloads, MapReduce, by virtue of its task scheduling strategy and shuffle mechanism, outperforms Spark in both the volume of intermediate data transferred and the number of intermediate files produced. For iterative and low-latency workloads, Spark partitions tasks more effectively according to the dependencies among the data; compared with MapReduce, it reduces the volume of intermediate data transfers and the number of synchronizations, improving the running efficiency of the computing system.
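To make the iterative-workload point concrete, below is a minimal Spark sketch (not taken from the paper; written in Scala against the public RDD API) of a simple one-dimensional gradient descent. The input RDD is cached once, so each iteration reuses the in-memory data and only a small scalar result returns to the driver, whereas an equivalent chain of MapReduce jobs would materialize intermediate results to HDFS between iterations. The input path, iteration count, and learning rate are illustrative assumptions.

```scala
// A minimal sketch, not from the paper: 1-D gradient descent on Spark.
// The point set is cached in memory once, so every iteration reuses it and only
// a scalar gradient is collected back to the driver; a chain of MapReduce jobs
// would instead write intermediate results to HDFS between iterations.
import org.apache.spark.sql.SparkSession

object IterativeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("IterativeSketch").getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical input: one numeric value per line (path is an assumption).
    val points = sc.textFile("hdfs:///data/points.txt")
                   .map(_.trim.toDouble)
                   .cache()                      // keep the data in memory across iterations

    var w = 0.0                                  // parameter being fitted
    for (_ <- 1 to 10) {                         // fixed iteration count for brevity
      // Gradient of the mean squared error (w - x)^2, computed on the cached RDD.
      val gradient = points.map(x => 2.0 * (w - x)).mean()
      w -= 0.1 * gradient                        // illustrative learning rate
    }

    println(s"fitted w = $w")
    spark.stop()
  }
}
```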
Authors
WU Xin-Dong (吴信东), JI Sheng-Wei (嵇圣硙)
School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230009, China; School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette 70504, USA
Source
Journal of Software (软件学报), 2018, No. 6, pp. 1770-1791 (22 pages)
Indexed in: EI, CSCD, Peking University Core Journals (北大核心)
Funding
National Key Research and Development Program of China (2016YFB1000901)
National Natural Science Foundation of China (91746209)
Innovative Research Team Program of the Ministry of Education of China (IRT17R3)