Funding: Supported by the National GNSS Research Center Program of the Defense Acquisition Program Administration and Agency for Defense Development, and by the Ministry of Science and ICT of the Republic of Korea through the Space Core Technology Development Program (No. NRF2018M1A3A3A02065722).
Abstract: In this paper, a cardinality compensation method based on the Information-weighted Consensus Filter (ICF) using data clustering is proposed in order to accurately estimate the cardinality of the Cardinalized Probability Hypothesis Density (CPHD) filter. Although the joint propagation of the intensity and the cardinality distribution in the CPHD filter allows for more reliable estimation of the cardinality (target number) than the PHD filter, tracking loss may occur in practical situations when noise and clutter in the measurements are high. For that reason, a cardinality compensation process is included in the CPHD filter, based on an information fusion step that combines the estimated cardinality obtained from the CPHD filter with the measured cardinality obtained through data clustering; the ICF is used for this information fusion. To verify the performance of the proposed method, simulations were carried out, and it was confirmed that multi-target tracking performance improved because the cardinality was estimated more accurately than with existing techniques.
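For intuition, the core fusion step can be sketched as an information-weighted average of the two cardinality estimates, each weighted by its inverse variance. This is a minimal scalar sketch assuming Gaussian-distributed estimates; the distributed consensus iterations of the full ICF are omitted, and the variable names are illustrative.

```python
def fuse_cardinality(n_filter: float, var_filter: float,
                     n_cluster: float, var_cluster: float) -> float:
    """Information-weighted fusion of two scalar cardinality estimates.

    Weights each estimate by its information (inverse variance), the
    scalar analogue of an ICF fusion step."""
    j_filter = 1.0 / var_filter    # information from the CPHD filter estimate
    j_cluster = 1.0 / var_cluster  # information from the clustering measurement
    return (j_filter * n_filter + j_cluster * n_cluster) / (j_filter + j_cluster)

# Example: the more confident clustering count pulls the fused estimate toward itself.
print(fuse_cardinality(n_filter=4.0, var_filter=2.0, n_cluster=5.0, var_cluster=0.5))  # 4.8
```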
Abstract: An edge coloring of a hypergraph H is a function c : E(H) → {1, 2, …} such that c(e₁) ≠ c(e₂) holds for any pair of intersecting edges e₁, e₂. The minimum number of colors in edge colorings of H is called the chromatic index of H and is denoted by χ'(H). Erdős, Faber and Lovász proposed a famous conjecture that χ'(H) ≤ n holds for any loopless linear hypergraph H with n vertices. In this paper, we show that the conjecture is true for gap-restricted hypergraphs. Our result extends a result of Alesandroni in 2021.
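For readers unfamiliar with the terminology, the standard definitions behind the statement (assumed here; the abstract does not spell them out) are:

```latex
% A hypergraph H = (V, E) is linear if distinct edges share at most one
% vertex, and loopless if every edge contains at least two vertices:
\[
  |e \cap f| \le 1 \quad \text{for all distinct } e, f \in E(H),
\]
% and the Erdős–Faber–Lovász conjecture asserts
\[
  \chi'(H) \;\le\; n \quad \text{for every loopless linear hypergraph } H
  \text{ on } n \text{ vertices.}
\]
```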
Funding: Supported by the CCF-Huawei Database System Innovation Research Plan under Grant No. CCF-HuaweiDBIR2020004A, the National Natural Science Foundation of China under Grant Nos. 61772091, 61802035, 61962006 and 61962038, the Sichuan Science and Technology Program under Grant Nos. 2021JDJQ0021 and 2020YJ0481, and the Digital Media Art, Key Laboratory of Sichuan Province, Sichuan Conservatory of Music, Chengdu, China under Grant No. 21DMAKL02.
Abstract: Although popular database systems perform well on query optimization, they still produce poor query execution plans when the join operations across multiple tables are complex. Bad execution planning usually results from bad cardinality estimates. The cardinality estimation models in traditional databases cannot provide high-quality estimates, because they are not capable of capturing the correlation between multiple tables in an effective fashion. Recently, state-of-the-art learning-based cardinality estimators have been shown to work better than the traditional empirical methods; basically, they use deep neural networks to compute the relationships and correlations of tables. In this paper, we propose a vertical scanning convolutional neural network (abbreviated as VSCNN) to capture the relationships between words in the word vector in order to generate a feature map. The proposed learning-based cardinality estimator converts Structured Query Language (SQL) queries from a sentence to a word vector; we encode table names with one-hot encoding and the samples into bitmaps, separately, and then merge them to obtain enough semantic information from data samples. In particular, the feature map obtained by VSCNN contains semantic information about SQL queries, including tables, joins, and predicates. Importantly, in order to improve the accuracy of cardinality estimation, we propose a negative sampling method for training the word vector by gradient descent from the base table, and compress it into a bitmap. Extensive experiments are conducted, and the results show that the q-error of the proposed vertical scanning convolutional neural network based model is reduced by at least 14.6% when compared with the estimators in traditional databases.
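The q-error cited at the end of the abstract is the standard quality metric in the cardinality estimation literature; a minimal sketch of how it is typically computed (the clipping floor of 1 is an assumption to avoid division by zero):

```python
import numpy as np

def q_error(estimated, true) -> np.ndarray:
    """q-error = max(est, true) / min(est, true); always >= 1, 1 means exact."""
    est = np.maximum(np.asarray(estimated, dtype=float), 1.0)
    tru = np.maximum(np.asarray(true, dtype=float), 1.0)
    return np.maximum(est / tru, tru / est)

# Example: an estimate of 50 rows against a true cardinality of 200 rows.
print(q_error([50.0], [200.0]))  # [4.0]
```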
Abstract: The joint development of computer and communication technologies has produced explosive, exponential growth in data. The enormous value contained in this data is plain to see, but the indiscriminate collection and analysis of datasets puts users' private data at risk of disclosure. To protect users' sensitive data while still responding effectively to cardinality queries, a differential-privacy-based algorithm named BFRRCE (Bloom Filter Random Response for Cardinality Estimation) is proposed. First, each user's data is preprocessed with a Bloom Filter data structure; then a local differential privacy perturbation algorithm perturbs the data, thereby protecting the user's sensitive values.
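A minimal sketch of the two steps the abstract describes: Bloom-filter encoding followed by a per-bit randomized-response perturbation (a common local differential privacy mechanism). The filter size, hash count, and keep probability are illustrative assumptions, not BFRRCE's actual parameters.

```python
import hashlib
import random

def bloom_bits(item: str, m: int = 64, k: int = 3) -> list:
    """Encode one item into an m-bit Bloom filter using k hash functions."""
    bits = [0] * m
    for i in range(k):
        digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
        bits[int.from_bytes(digest[:8], "big") % m] = 1
    return bits

def randomized_response(bits, p_keep: float = 0.75) -> list:
    """Perturb each bit locally: keep it with probability p_keep, flip it
    otherwise; per-bit privacy follows the usual randomized-response bound
    epsilon = ln(p_keep / (1 - p_keep))."""
    return [b if random.random() < p_keep else 1 - b for b in bits]

perturbed = randomized_response(bloom_bits("user-42"))
```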
Funding: Supported by the National Natural Science Foundation of China grants (Nos. 11971447 and 11871442) and the Fundamental Research Funds for the Central Universities.
Abstract: Submodular optimization is widely used on large datasets. In order to speed up problem solving, it is essential to design low-adaptivity algorithms that achieve acceleration in parallel. In general, the function values are given by a value oracle, but in practice the oracle queries may consume a lot of time; hence, how to strike a balance between the two costs is important. In this paper, we focus on maximizing a normalized and strictly monotone set function with diminishing-return ratio γ under a cardinality constraint, and propose two algorithms for it. We apply the adaptive sequencing technique to devise the first algorithm, whose approximation ratio is arbitrarily close to 1 − e^(−γ) in O(log n · log(log k/γ)) adaptive rounds, requiring O(log n² · log(log k/γ)) queries. By adding preprocessing and parameter-estimation steps to the first algorithm, we obtain the second one, which trades a small sacrifice in adaptive complexity for a significant improvement in query complexity: with the same approximation ratio and adaptive complexity, the query complexity is significantly improved. To the best of our knowledge, this is the first paper to design adaptive algorithms for maximizing a monotone function using the diminishing-return ratio.
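For reference, one common definition of the diminishing-return (DR) ratio γ (an assumption here; the paper may use an equivalent formulation):

```latex
\[
  \gamma \;=\; \max\Bigl\{ \gamma' \in [0,1] \;:\;
    f(A \cup \{v\}) - f(A) \;\ge\; \gamma' \bigl( f(B \cup \{v\}) - f(B) \bigr)
    \quad \forall\, A \subseteq B \subseteq V,\; v \notin B \Bigr\}.
\]
% gamma = 1 recovers a submodular function, and the approximation
% guarantee 1 - e^{-gamma} degrades gracefully as gamma decreases.
```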
Abstract: The emergence of digital networks and the wide adoption of information on internet platforms have given rise to threats against users' private information. Many intruders actively seek such private data either for sale or for other inappropriate purposes. Similarly, national and international organizations hold country-level and company-level private information that could be accessed through different network attacks. Therefore, a Network Intruder Detection System (NIDS) becomes essential for protecting these networks and organizations. In the evolution of NIDS, Artificial Intelligence (AI) assisted tools and methods have been widely adopted to provide effective solutions. However, the development of NIDS still faces challenges at the dataset and machine learning levels, such as large deviations in numeric features, the presence of numerous irrelevant categorical features with high cardinality, and class imbalance in multiclass-level data. To address these challenges and offer a unified solution to NIDS development, this study proposes a novel framework that preprocesses datasets and applies a Box-Cox transformation to linearly transform the numeric features and bring them into closer alignment. Cardinality reduction was applied to categorical features through the binning method. Subsequently, the class-imbalanced dataset was addressed using the adaptive synthetic sampling (ADASYN) data generation method. Finally, the preprocessed, refined, and oversampled feature set was divided into training and test sets with an 80–20 ratio, and two experiments were conducted. In Experiment 1, binary classification was executed using four machine learning classifiers, with the extra trees classifier achieving the highest accuracy of 97.23% and an AUC of 0.9961. In Experiment 2, multiclass classification was performed, and the extra trees classifier again emerged as the most effective, achieving an accuracy of 81.27% and an AUC of 0.97. The results were evaluated based on training, testing, and total time, along with a comparative analysis with state-of-the-art studies.
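A minimal sketch of the described preprocessing-and-classification pipeline using common library implementations (scipy's Box-Cox, imbalanced-learn's ADASYN, scikit-learn's extra trees); the column names, the rare-category threshold, and the shift for non-positive values are assumptions.

```python
import pandas as pd
from scipy.stats import boxcox
from imblearn.over_sampling import ADASYN
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

def preprocess(df: pd.DataFrame, numeric_cols, categorical_cols, min_freq=0.01):
    df = df.copy()
    for col in numeric_cols:
        shifted = df[col] - df[col].min() + 1.0        # Box-Cox requires x > 0
        df[col], _ = boxcox(shifted)
    for col in categorical_cols:
        freq = df[col].value_counts(normalize=True)    # bin rare categories together
        rare = freq[freq < min_freq].index
        df[col] = df[col].where(~df[col].isin(rare), "OTHER")
    return pd.get_dummies(df, columns=list(categorical_cols))

# Hypothetical usage with an already-loaded NIDS dataframe `df` and label column "label".
X = preprocess(df.drop(columns="label"), ["duration", "bytes"], ["proto", "service"])
X_res, y_res = ADASYN(random_state=0).fit_resample(X, df["label"])
X_tr, X_te, y_tr, y_te = train_test_split(X_res, y_res, test_size=0.2, random_state=0)
clf = ExtraTreesClassifier(random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)),
      roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))  # binary-case AUC
```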
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11971092) and the Fundamental Research Funds for the Central Universities (Grant No. DUT20RC(3)079).
Abstract: The optimization problem of the cardinality-constrained mean-variance (CCMV) model for sparse portfolio selection is considered. To overcome the difficulties caused by the cardinality constraint, an exact penalty approach is employed, and the CCMV problem is transformed into a difference-of-convex-functions (DC) problem. By exploiting the DC structure of the resulting problem and the superlinear convergence of the semismooth Newton (ssN) method, an inexact proximal DC algorithm with a sieving strategy based on a majorized ssN method (siPDCA-mssN) is proposed. For solving the inner problems of siPDCA-mssN from the dual, second-order information is incorporated and an efficient mssN method is employed. The global convergence of the sequence generated by siPDCA-mssN is proved. To solve large-scale CCMV problems, a decomposed siPDCA-mssN (DsiPDCA-mssN) is introduced. To demonstrate the efficiency of the proposed algorithms, siPDCA-mssN and DsiPDCA-mssN are compared with the penalty proximal alternating linearized minimization method and the CPLEX (12.9) solver in numerical experiments on real-world market data and large-scale simulated data. The numerical results demonstrate that siPDCA-mssN and DsiPDCA-mssN outperform the other methods in computation time and optimal value. The out-of-sample experiments show that the solutions of the CCMV model are better than those of other portfolio selection models in terms of Sharpe ratio and sparsity.
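For context, one common form of the CCMV model (a sketch; the paper's exact formulation, bounds, and penalty term may differ):

```latex
\[
\begin{aligned}
  \min_{x \in \mathbb{R}^n} \quad & x^{\top} \Sigma x \\
  \text{s.t.} \quad & \mu^{\top} x \ge \rho, \qquad e^{\top} x = 1, \\
  & \|x\|_0 \le K, \qquad 0 \le x_i \le u_i,\ i = 1, \dots, n.
\end{aligned}
\]
% Sigma is the return covariance, mu the mean return vector, rho a target
% return, e the all-ones vector, and ||x||_0 the number of nonzero weights.
```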
Funding: Supported by the National Natural Science Foundation of China grants (Nos. 11101092 and 10971034), the Joint National Natural Science Foundation of China/Research Grants Council of Hong Kong grant (No. 71061160506), and the Research Grants Council of Hong Kong grants (Nos. CUHK414808 and CUHK414610).
Abstract: Mathematical programming problems with semi-continuous variables and a cardinality constraint have many applications, including production planning, portfolio selection, compressed sensing, and subset selection in regression. This class of problems can be modeled as mixed-integer programs with special structure and is in general NP-hard. In the past few years, promising exact and approximate methods have been developed based on new reformulations and on approximation and relaxation techniques. We survey in this paper these recent developments for this challenging class of mathematical programming problems.
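The standard mixed-integer reformulation such surveys build on introduces one binary indicator per variable; a sketch, assuming semi-continuous bounds 0 < a_i ≤ b_i:

```latex
\[
  x_i \in \{0\} \cup [a_i, b_i]
  \quad\Longleftrightarrow\quad
  a_i y_i \le x_i \le b_i y_i,\ \ y_i \in \{0,1\},
\]
\[
  \text{and the cardinality constraint becomes} \quad \sum_{i=1}^{n} y_i \le K.
\]
```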
Abstract: The cardinality-constrained optimization problem is investigated in this note. In portfolio optimization, the cardinality constraint allows one to invest in at most K assets out of a universe of N assets, for a prespecified value of K. It is generally agreed that choosing a "small" value of K forces the implementation of diversification in small portfolios. However, the question of how small K must be has remained unanswered. In the present work, using a comparative approach, we show computationally that optimal portfolio selection with a relatively small or large number of assets K may produce similar results with differentiated reliabilities.
Abstract: It has been shown by Sierpinski that a compact, Hausdorff, connected topological space (otherwise known as a continuum) cannot be decomposed into either a finite number of two or more disjoint, nonempty, closed sets or a countably infinite family of such sets. In particular, a closed interval of the real line endowed with the usual topology cannot be partitioned into a countably infinite number of disjoint, nonempty closed sets. On the positive side, however, one can certainly express such an interval as a union of c disjoint closed sets, where c is the cardinality of the real line. For example, a closed interval is surely the union of its points, each set consisting of a single point being closed. Surprisingly enough, except for a set of Lebesgue measure 0, these closed sets can be chosen to be perfect sets, i.e., closed sets every point of which is an accumulation point. They even turn out to be nowhere dense (containing no intervals). Such nowhere dense, perfect sets are sometimes called Cantor sets.
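As a concrete instance of such a set (a standard example, not taken from the abstract), the middle-thirds Cantor set is closed, perfect, nowhere dense, and has Lebesgue measure 0:

```latex
\[
  C \;=\; [0,1] \setminus \bigcup_{n=1}^{\infty} \bigcup_{k} I_{n,k}
  \;=\; \Bigl\{ \textstyle\sum_{n=1}^{\infty} a_n 3^{-n} \;:\; a_n \in \{0, 2\} \Bigr\},
\]
% where the I_{n,k} are the open middle thirds removed at stage n; the
% removed intervals have total length \sum_n 2^{n-1}/3^n = 1, so m(C) = 0.
```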
Abstract: The B-spline wavelets N_m(x) play an important role in approximation theory since they are piecewise polynomials, so they may be used to fit a function and its derivatives at the integers. However, except for m = 1, 2, they do not have the interpolating property (i.e., N_m(n) ≠ δ_{n0}). In this work, we shall construct a family of scaling functions of father wavelets which have the interpolating property and whose Fourier transforms are spline functions. As usual, these functions can be orthogonalized by Meyer's technique, and the corresponding mother wavelets ψ may be found by the
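For reference, the underlying objects (standard definitions, assumed here): the B-splines are generated by repeated convolution, and the interpolating property the abstract refers to reads

```latex
\[
  N_1 = \chi_{[0,1)}, \qquad
  N_m(x) = (N_{m-1} * N_1)(x) = \int_0^1 N_{m-1}(x - t)\, dt,
\]
\[
  \text{interpolating property:} \quad \phi(n) = \delta_{n0}
  \quad \text{for all integers } n.
\]
```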
Abstract: In the present work, a construction is presented that makes possible the creation of an additive channel A of cardinality s and rank r for arbitrary integers s, r, n (r ≤ min(n, s−1)), as well as the creation of a code correcting the errors of the channel A.
Funding: This work was supported by the National High Technology Research and Development Program of China (2012AA01A510 and 2012AA01AS09), and partially supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 61402518, 61403060) and the Jiangsu Province Science Foundation for Youths (BK20150722).
Abstract: Counting the cardinality of flows for massive high-speed traffic over sliding windows is still challenging work under time and space constraints, but plays a key role in many network applications, such as traffic management and routing optimization in software defined networks. In this paper, we propose a novel data structure (called LRU-Sketch) to address the problem. The significant contributions are as follows. 1) The proposed data structure adapts a well-known probabilistic sketch to the sliding-window model. 2) By using the least-recently-used (LRU) replacement policy, we design a highly time-efficient algorithm for timely forgetting of stale information, which takes constant (O(1)) time per time slot. 3) Moreover, a further memory-reducing scheme is given at the cost of very little loss of accuracy. 4) Comprehensive experiments, performed on two real IP trace files, confirm that the proposed scheme attains high accuracy and high time efficiency.
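To illustrate the LRU idea behind contribution 2), a minimal exact (hash-table) version is sketched below: each flow keeps only its most recent time slot, and stale flows are evicted from the front in amortized O(1) time per operation. The paper's LRU-Sketch replaces this exact table with a probabilistic sketch to bound memory; the class name and window parameter are illustrative.

```python
from collections import OrderedDict

class SlidingWindowFlowCounter:
    """Exact LRU-style distinct-flow counter over a sliding window of time slots."""

    def __init__(self, window_slots: int):
        self.window = window_slots
        self.last_seen = OrderedDict()  # flow id -> last slot seen, oldest first

    def observe(self, flow_id, slot: int) -> None:
        # Re-inserting moves the flow to the most-recently-used end.
        self.last_seen.pop(flow_id, None)
        self.last_seen[flow_id] = slot
        self._expire(slot)

    def _expire(self, now: int) -> None:
        # Oldest entries sit at the front; evict until all are inside the window.
        while self.last_seen:
            flow_id, slot = next(iter(self.last_seen.items()))
            if slot > now - self.window:
                break
            self.last_seen.popitem(last=False)

    def cardinality(self, now: int) -> int:
        self._expire(now)
        return len(self.last_seen)
```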
Funding: Supported by the National Natural Science Foundation of China (61802118), the Natural Science Foundation of Heilongjiang Province (YQ2020F013), the Advanced Programs of Heilongjiang Province for Overseas Scholars, the Outstanding Youth Fund of Heilongjiang University, and the Heilongjiang University Innovation Fund (YJSCX2022-247HLJU).
Abstract: In modern society, it is often necessary to perform secure computations over private sets held by different entities. For instance, two merchants may wish to calculate the number of common customers and the total number of users without disclosing their own private data. To solve this problem, a semi-quantum protocol for private computation of set cardinalities based on Greenberger-Horne-Zeilinger (GHZ) states is proposed for the first time in this paper, in which all parties perform only single-particle measurements where necessary. With the assistance of a semi-honest third party (TP), two semi-quantum participants can simultaneously obtain the intersection cardinality and the union cardinality. Furthermore, security analysis shows that the presented protocol can withstand some well-known quantum attacks, such as the intercept-measure-resend attack and the entangle-measure attack. Compared with existing quantum protocols for Private Set Intersection Cardinality (PSI-CA) and Private Set Union Cardinality (PSU-CA), the complicated oracle operations and powerful quantum capabilities are not required in the proposed protocol. Therefore, it seems more appropriate to implement this protocol with current technology.
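For reference, the three-qubit GHZ state on which the protocol is built is the standard maximally entangled state, whose computational-basis measurement outcomes are perfectly correlated across the three particles:

```latex
\[
  |\mathrm{GHZ}\rangle \;=\; \frac{1}{\sqrt{2}} \bigl( |000\rangle + |111\rangle \bigr).
\]
```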