Abstract
Data deduplication is a key step in big-data preprocessing. To improve deduplication efficiency and to strengthen the classic Simhash algorithm's worst-case behavior, this paper works on raw Chinese microblog text and modifies the similarity formula of the classic Simhash algorithm so that the text duplication rate is taken into account. In the retrieval step, it borrows the idea of bucket sorting, allocating threads in multiple rounds and at multiple levels to improve efficiency. Experimental results show that the improved algorithm significantly reduces running time and improves accuracy compared with the classic algorithm.
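The classic Simhash fingerprinting that the paper builds on can be sketched as follows. This is a minimal illustration only, not the paper's modified similarity formula or its bucket-based threading scheme; the choice of MD5 as the per-token hash and the 64-bit fingerprint width are assumptions:

```python
import hashlib

def simhash(tokens, bits=64):
    """Weighted bit-vote Simhash: each token votes +1/-1 on every bit position."""
    votes = [0] * bits
    for tok in tokens:
        # Hash each token to a large integer (MD5 chosen for illustration).
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    # Positive vote -> bit set; the sign pattern is the fingerprint.
    fp = 0
    for i in range(bits):
        if votes[i] > 0:
            fp |= 1 << i
    return fp

def hamming_distance(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def similarity(a, b, bits=64):
    """Classic Simhash similarity: 1 minus the normalized Hamming distance."""
    return 1.0 - hamming_distance(a, b) / bits
```

Near-duplicate texts share most tokens, so their fingerprints agree on most bit positions and the Hamming distance stays small; the paper's contribution adjusts this similarity measure to also weigh the text duplication rate.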
Source
《计算机与现代化》
2017, No. 7, pp. 38-41 (4 pages)
Computer and Modernization