Abstract: To improve the coding efficiency of network-coding-based broadcast retransmission in wireless networks, and thereby effectively reduce the number of retransmissions and the packet transmission delay, this paper proposes an efficient network coding broadcast retransmission approach that actively avoids coding redundancy (network coding broadcasting retransmission approach based on redundancy avoiding, NCRA). When encoding, NCRA actively prevents undecodable coding combinations from being repeatedly encoded and retransmitted. At the same time, to fully exploit coding opportunities, it preferentially encodes and retransmits the lost packets that contribute most to decoding the not-yet-decoded coded packets already buffered at the receiving nodes; among packets with equal decoding contribution, it preferentially encodes the earlier-lost packets to reduce packet transmission delay. Theoretical analysis and simulation results show that, compared with existing algorithms, NCRA effectively reduces the number of retransmissions, lowers the packet transmission delay, decreases network overhead, and further improves the efficiency of coded retransmission.
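The core selection rule the abstract describes can be sketched as follows. This is a hypothetical illustration, not NCRA's published pseudocode: the function name `select_coding_set` and the greedy structure are our own. The idea it demonstrates is that an XOR combination of lost packets is decodable by a receiver only if that receiver misses at most one packet in the combination, and that iterating over packets in loss order favours earlier-lost packets to reduce delay.

```python
def select_coding_set(loss_lists):
    """loss_lists: per-receiver lists of lost packet ids, in loss order.

    Returns a list of packet ids to XOR into one retransmission such that
    every receiver misses at most one of them, i.e. the combination is
    decodable by all receivers and never wasted (no coding redundancy).
    """
    # Build a global loss order: earlier-lost packets come first,
    # so ties are broken in favour of reducing transmission delay.
    order = []
    for losses in loss_lists:
        for p in losses:
            if p not in order:
                order.append(p)

    coding_set = []
    for p in order:
        # Adding p must keep the combination decodable for every receiver:
        # no receiver may be missing two packets of the coding set.
        ok = all(
            sum(1 for q in coding_set + [p] if q in losses) <= 1
            for losses in loss_lists
        )
        if ok:
            coding_set.append(p)
    return coding_set

# Three receivers: packets 1 and 2 can be XORed together (no receiver lost
# both), but packet 3 conflicts with packet 1 at the third receiver.
print(select_coding_set([[1], [2], [1, 3]]))  # -> [1, 2]
```

A single XOR retransmission of packets 1 and 2 then repairs both of the first two receivers at once, which is the "coding opportunity" the abstract refers to.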
Abstract: This paper presents some new algorithms to efficiently mine max frequent generalized itemsets (g-itemsets) and essential generalized association rules (g-rules). These are compact and general representations for all frequent patterns and all strong association rules in the generalized environment. Our results fill an important gap among algorithms for frequent patterns and association rules by combining two concepts. First, generalized itemsets employ a taxonomy of items, rather than a flat list of items. This produces more natural frequent itemsets and associations such as (meat, milk) instead of (beef, milk), (chicken, milk), etc. Second, compact representations of frequent itemsets and strong rules, whose result size is exponentially smaller, can solve a standard dilemma in mining patterns: with small threshold values for support and confidence, the user is overwhelmed by the extraordinary number of identified patterns and associations; but with large threshold values, some interesting patterns and associations fail to be identified. Our algorithms can also expand those max frequent g-itemsets and essential g-rules into the much larger set of ordinary frequent g-itemsets and strong g-rules. While that expansion is not recommended in most practical cases, we do so in order to present a comparison with existing algorithms that only handle ordinary frequent g-itemsets. In this case, the new algorithm is shown to be thousands, and in some cases millions, of times faster than previous algorithms. Further, the new algorithm succeeds in analyzing deeper taxonomies, with depths of seven or more. Experimental results for previous algorithms limited themselves to taxonomies with depth at most three or four. In each of the two problems, a straightforward lattice-based approach is briefly discussed and then a classification-based algorithm is developed.
In particular, the two classification-based algorithms are MFGI_class for mining max frequent g-itemsets and EGR_class for mining essential g-rules.
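The taxonomy effect mentioned above, (meat, milk) becoming frequent where neither (beef, milk) nor (chicken, milk) is, can be shown with a toy example. This is our own illustration under an assumed two-level taxonomy, not the paper's MFGI_class algorithm: generalizing leaf items to their parents lets the generalized itemset accumulate the support of all its specializations.

```python
# Assumed toy taxonomy: beef and chicken generalize to meat; milk is its
# own top-level item. (Illustrative data, not from the paper.)
taxonomy = {"beef": "meat", "chicken": "meat", "milk": "milk"}

transactions = [
    {"beef", "milk"},
    {"chicken", "milk"},
    {"beef", "milk"},
    {"chicken"},
]

def support(itemset, txns):
    """Fraction of transactions containing every item of `itemset`.

    A transaction 'contains' a generalized item if it holds either the
    item itself or any leaf that the taxonomy maps to it.
    """
    def holds(txn, item):
        return any(leaf == item or taxonomy.get(leaf) == item
                   for leaf in txn)
    return sum(all(holds(t, i) for i in itemset) for t in txns) / len(txns)

print(support({"beef", "milk"}, transactions))     # -> 0.5
print(support({"chicken", "milk"}, transactions))  # -> 0.25
print(support({"meat", "milk"}, transactions))     # -> 0.75
```

With a minimum support of, say, 0.6, only the generalized itemset (meat, milk) qualifies, which is exactly the kind of more natural pattern the taxonomy makes visible.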