Funding: Supported by the National Natural Science Foundation of China (No. U1936209).
Abstract: The Learning With Errors (LWE) problem is widely used in lattice-based cryptography, which is the most promising direction in post-quantum cryptography. There is a variety of LWE-solving methods, which can be classified into four groups: lattice methods, algebraic methods, combinatorial methods, and exhaustive search. The Blum–Kalai–Wasserman (BKW) algorithm is an important family of combinatorial algorithms; it was first presented for solving the Learning Parity with Noise (LPN) problem and then extended to solve LWE. In this paper, we give an overview of BKW algorithms for solving LWE. We introduce the framework and key techniques of BKW algorithms, compare different BKW algorithms with each other and with lattice methods by estimating the concrete security of specific LWE instances, and briefly discuss the current problems and potential future directions of BKW algorithms.
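For intuition, the sketch below shows one block-reduction step of plain BKW on LWE samples (a, b) with b = <a, s> + e mod q: samples that agree on a block of coordinates are subtracted pairwise, zeroing that block at the cost of roughly doubling the noise. The toy sample generator and all parameter values are illustrative, not a concrete security estimate.

```python
# Sketch: one reduction step of plain BKW on LWE samples (a, b) over Z_q.
# Samples sharing the same value on coordinates [lo, hi) are subtracted
# pairwise, zeroing that block while (roughly) doubling the error.
import random

def lwe_samples(n, q, s, num, sigma=1):
    """Toy LWE sample generator: b = <a, s> + e mod q with small rounded-Gaussian e."""
    out = []
    for _ in range(num):
        a = [random.randrange(q) for _ in range(n)]
        e = round(random.gauss(0, sigma))
        out.append((a, (sum(x * y for x, y in zip(a, s)) + e) % q))
    return out

def bkw_reduce_block(samples, q, lo, hi):
    """Collide samples that agree on the block [lo, hi)."""
    buckets, reduced = {}, []
    for a, b in samples:
        key = tuple(a[lo:hi])
        if all(v == 0 for v in key):              # block already zero
            reduced.append((a, b))
        elif key in buckets:                      # collision: subtract stored sample
            a2, b2 = buckets.pop(key)
            reduced.append(([(x - y) % q for x, y in zip(a, a2)], (b - b2) % q))
        else:
            buckets[key] = (a, b)
    return reduced

q, n, s = 11, 4, [3, 1, 4, 1]
reduced = bkw_reduce_block(lwe_samples(n, q, s, 2000), q, 0, 2)
print(len(reduced), reduced[0][0][:2])            # first two coordinates are now zero
```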
Abstract: Locating bug code snippets (BugCode for short) has been a complex problem throughout the history of software security, mainly because the constraints that define BugCode are obscure and hard to summarize. Previously, security analysts attempted to define such constraints manually (e.g., limiting buffer size to detect overflows), but such efforts covered only a limited set of BugCode types. Recent research addresses this problem by extracting constraints from program documentation, which shows potential for detecting API misuse. But for bugs beyond the scope of API misuse, such an approach becomes less effective, since the corresponding constraints are not defined in documents, not to mention programs without documentation. In this paper, inspired by the fact that expert programmers often correct BugCode on open forums such as StackOverflow, we design an approach that automatically extracts knowledge from StackOverflow and leverages it to detect BugCode. The content on StackOverflow comes from ordinary developers; their writing tends to be loosely organized and written in various styles, which makes it more challenging to analyze than program documentation. To address these challenges, we design a custom tokenization approach to segment sentences and employ sentiment analysis to find the Controversial Sentences (CSs) that typically contain the constraints we need for code analysis. We then use constituency parsing to extract knowledge from CSs, which helps locate BugCode. We evaluated our system on 41,144 comments from questions tagged with Java and Android. The results show that our approach achieves 95.5% precision in discovering CSs. We discovered 276 pieces of BugCode proved to be true through manual validation, including an assigned CVE. 89.3% of the discovered bugs remained in the current version of the answers and are unknown to users.
Funding: IIE authors are supported in part by the National Key R&D Program of China (2020AAA0140001), NSFC U1836211, the Beijing Natural Science Foundation (No. M22004), the Anhui Department of Science and Technology under Grant 202103a05020009, the Youth Innovation Promotion Association CAS, the Beijing Academy of Artificial Intelligence (BAAI), and a research grant from Huawei.
Abstract: Clone detection has received much attention in many fields such as malicious code detection, vulnerability hunting, and code copyright infringement detection. However, cyber criminals may obfuscate code to impede violation detection. To date, few studies have investigated the robustness of clone detectors, especially the in-fashion deep learning-based ones, against obfuscation. Meanwhile, most of these studies only measure the difference between one code snippet and its obfuscated version. In reality, however, attackers may modify the original code before obfuscating it, so what should be evaluated is the detection of obfuscated code from cloned code, not from the original code. To this end, we conduct a comprehensive study evaluating 3 popular deep learning-based clone detectors and 6 commonly used traditional ones. Regarding the data, we collect 6512 clone pairs of five types from the BigCloneBench dataset and obfuscate one program of each pair via 64 strategies of 6 state-of-the-art commercial obfuscators. We also collect 1424 non-clone pairs to evaluate false positives. In sum, a benchmark of 524,148 code pairs (either clone or not) is generated and passed to the clone detectors for evaluation. To automate the evaluation, we develop a uniform evaluation framework integrating the clone detectors and obfuscators. The results bring interesting findings on how obfuscation affects the performance of clone detection and on the differences between traditional and deep learning-based clone detectors. In addition, we conduct manual code reviews to uncover the root causes of the observed phenomena and give suggestions to users from different perspectives.
Funding: NSFC (92270204), Beijing Natural Science Foundation (No. M22004), Beijing Nova Program.
Abstract: Virtual personal assistants (VPAs), such as Amazon Alexa and Google Assistant, are software agents designed to perform tasks or provide services to individuals in response to user commands. VPAs extend their functions through third-party voice apps, thereby attracting more users to VPA-equipped products. Previous studies demonstrate vulnerabilities in the certification, installation, and usage of these third-party voice apps, but they focus on individual apps. To the best of our knowledge, there is no prior research that explores the correlations among voice apps. Voice apps represent a new type of application that interacts with users mainly through a voice user interface instead of a graphical user interface, requiring a distinct approach to analysis. In this study, we present a novel voice app similarity analysis approach that analyzes voice apps in the market from a new perspective. Our approach, called SkillSim, detects similarities among voice apps (i.e., skills) based on two dimensions: text similarity and structure similarity. SkillSim measures 30,000 voice apps in the Amazon skill market and reveals that more than 25.9% have at least one other skill with a text similarity greater than 70%. Our analysis identifies several factors that contribute to the high number of similar skills, including the assistant development platforms and their limited templates. Additionally, we observe interesting phenomena, such as developers or platforms creating multiple similar skills under different accounts for purposes such as advertising. Furthermore, we find that some assistant development platforms produce multiple similar but non-compliant skills, for example skills that request user privacy in a non-compliant way, which poses a security risk. Based on the similarity analysis results, we gain a deeper understanding of voice apps in the mainstream market.
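A minimal sketch of the text-similarity dimension, assuming skill descriptions are available as plain strings: pairwise TF-IDF cosine similarity with a 0.7 threshold (echoing the 70% figure above). The example descriptions and function names are illustrative, not SkillSim's actual implementation.

```python
# Sketch: flag pairs of skill descriptions whose TF-IDF cosine similarity
# exceeds a threshold (0.7, echoing the 70% figure above).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_skill_pairs(descriptions, threshold=0.7):
    """descriptions: list of skill description strings (illustrative input)."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(descriptions)
    sim = cosine_similarity(tfidf)          # pairwise cosine similarity matrix
    pairs = []
    for i in range(len(descriptions)):
        for j in range(i + 1, len(descriptions)):
            if sim[i, j] >= threshold:
                pairs.append((i, j, float(sim[i, j])))
    return pairs

skills = [
    "Ask for a daily horoscope reading for your zodiac sign.",
    "Get your daily horoscope reading for any zodiac sign.",
    "Play relaxing rain sounds to help you sleep.",
]
print(similar_skill_pairs(skills))          # the two horoscope skills pair up
```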
Funding: Supported by the National Key Basic Research Program of China under Grant No. 2013CB834203, the National Natural Science Foundation of China under Grant Nos. 61472417 and 61472120, and the Research Council of Norway.
Abstract: This paper proposes a general method to construct 1-resilient Boolean functions by modifying the Tu-Deng and Tang-Carlet-Tang functions. Cryptographic properties such as algebraic degree, nonlinearity, and algebraic immunity are also considered. A sufficient condition for the modified functions to have optimal algebraic degree in terms of the Siegenthaler bound is proposed. The authors obtain a lower bound on the nonlinearity of the Tang-Carlet-Tang functions, which is slightly better than the known result. If the "continuity" of the support and zero sets is not broken, the functions constructed in this paper have suboptimal algebraic immunity. Finally, four specific classes of 1-resilient Boolean functions with the mentioned good cryptographic properties are obtained from this construction. Experimental results show that many of the constructed 1-resilient Boolean functions have higher nonlinearities than known 1-resilient functions obtained by modifying the Tu-Deng and Tang-Carlet-Tang functions.
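As a reference point for the nonlinearity claims above, the sketch below computes the nonlinearity of a Boolean function from its Walsh-Hadamard spectrum, nl(f) = 2^(n-1) - max_a |W_f(a)|/2. The example function is a small quadratic toy, not one of the constructions in the paper.

```python
# Sketch: nonlinearity of an n-variable Boolean function via the
# Walsh-Hadamard transform: nl(f) = 2^(n-1) - max_a |W_f(a)| / 2.
def walsh_hadamard(truth_table):
    """truth_table: list of 0/1 values of length 2^n; returns W_f(a) for all a."""
    w = [1 - 2 * bit for bit in truth_table]   # map 0/1 -> +1/-1
    h, n = 1, len(w)
    while h < n:                                # in-place fast WHT
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return w

def nonlinearity(truth_table):
    spectrum = walsh_hadamard(truth_table)
    return len(truth_table) // 2 - max(abs(v) for v in spectrum) // 2

# Example: f(x1, x2, x3) = x1*x2 XOR x3, a small quadratic function.
f = [(x1 & x2) ^ x3 for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1)]
print(nonlinearity(f))   # prints 2, the nonlinearity of this 3-variable function
```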
Funding: Supported in part by the CAS Project QYZDJ-SSW-SYS022, the National Natural Science Foundation of China under Grant No. 61877058, and the Strategy Cooperation Project AQ-1701.
Abstract: Weispfenning in 1992 introduced the concepts of the comprehensive Gröbner system/basis of a parametric polynomial system, and he also presented an algorithm to compute them. Since then, this research field has attracted much attention over the past several decades, and many efficient algorithms have been proposed. Moreover, these algorithms have been applied to many different fields, such as solving parametric polynomial equations, geometric theorem proving and discovering, quantifier elimination, and so on. This survey brings together the works published between 1992 and 2018, and we hope that it is valuable for this research area.
Funding: Supported by the National Key Basic Research Program of China under Grant Nos. 2013CB834203 and 2011CB302400, the National Natural Science Foundation of China under Grant Nos. 11301523, 11371356, and 61121062, the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA06010701, and IIE's Research Project on Cryptography under Grant Nos. Y3Z0013102, Y3Z0018102, and Y4Z0061A02.
Abstract: Some techniques using linear algebra were introduced by Faugère in F4 to speed up the reduction process during Gröbner basis computations. These techniques can also be used in fast implementations of F5 and some other signature-based Gröbner basis algorithms. When these techniques are applied, a very important step is constructing matrices from critical pairs and existing polynomials by the Symbolic Preprocessing function (given in F4). Since multiplications of monomials and polynomials are involved in the Symbolic Preprocessing function, this step can be very costly when the number of involved polynomials/monomials is huge. In this paper, multiplications of monomials and polynomials over a Boolean polynomial ring are investigated, and a specific method of implementing the Symbolic Preprocessing function over Boolean polynomial rings is reported. Many examples have been tested using this method, and the experimental data show that the new method is very efficient.
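In a Boolean polynomial ring, x_i^2 = x_i, so a monomial can be stored as a bitmask of its variables and monomial multiplication becomes a bitwise OR; a polynomial is a set of monomials with coefficients in F_2, so equal products cancel. The sketch below illustrates this representation only; it is not the paper's implementation of Symbolic Preprocessing.

```python
# Sketch: Boolean monomial = bitmask of variables (x_i^2 = x_i),
# Boolean polynomial = set of monomial masks (coefficients in F_2).
def mono_mul(m1, m2):
    """Multiply two Boolean monomials: union of their variable sets."""
    return m1 | m2

def poly_mul_mono(poly, mono):
    """Multiply a Boolean polynomial by a monomial; equal products cancel over F_2."""
    result = set()
    for m in poly:
        result.symmetric_difference_update({mono_mul(m, mono)})   # add mod 2
    return result

# Example with variables x0, x1, x2 encoded as bits 0, 1, 2.
x0, x1, x2 = 0b001, 0b010, 0b100
f = {x0 | x1, x2, 0}            # f = x0*x1 + x2 + 1
print(poly_mul_mono(f, x0))     # masks of x0*x1 + x0*x2 + x0
```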
Funding: Supported by the National Key R&D Program of China (No. 2016QY04W0805), NSFC U1536106 and 61728209, the National Top-notch Youth Talents Program of China, the Youth Innovation Promotion Association CAS, and the Beijing Nova Program.
Abstract: Telecommunication fraud has been causing severe financial loss to telecommunication customers in China for several years. Traditional approaches to detecting telecommunication fraud usually rely on constructing a blacklist of fraudulent telephone numbers. However, attackers can simply evade such detection by changing their numbers, which is very easy to achieve through VoIP (Voice over IP). To solve this problem, we detect telecommunication fraud from the contents of a call instead of simply through the caller's telephone number. In particular, we collect descriptions of telecommunication fraud from news reports and social media. We use machine learning algorithms to analyze the data and to select the high-quality descriptions from the collected data to construct datasets. We then leverage natural language processing to extract features from the textual data. After that, we build rules to identify similar contents within the same call for further telecommunication fraud detection. To achieve online detection of telecommunication fraud, we develop an Android application that can be installed on a customer's smartphone. When an incoming fraud call is answered, the application dynamically analyzes the contents of the call in order to identify fraud. Our results show that we can protect customers effectively.
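A minimal sketch of the content-matching idea: compare the word set of a call transcript against a small corpus of fraud descriptions using Jaccard similarity. The sample phrases, threshold, and tokenizer are illustrative placeholders, not the features or rules built in the paper.

```python
# Sketch: flag a call transcript whose word overlap with known fraud
# descriptions (Jaccard similarity) exceeds a threshold.
import re

def tokens(text):
    """Lowercase word tokens; a stand-in for the paper's NLP feature extraction."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def looks_like_fraud(transcript, fraud_descriptions, threshold=0.3):
    call = tokens(transcript)
    scores = [jaccard(call, tokens(d)) for d in fraud_descriptions]
    return max(scores) >= threshold, max(scores)

fraud_corpus = [
    "caller claims your bank account is frozen and asks you to transfer money",
    "caller pretends to be the police and demands a security deposit",
]
call = "hello this is the police your bank account is involved please transfer money now"
print(looks_like_fraud(call, fraud_corpus))   # (True, 0.3) for this toy example
```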
Funding: Supported by the National Key Basic Research Project of China (Grant No. 2011CB302400), the National Natural Science Foundation of China (Grant Nos. 10971217, 60970152, and 61121062), and IIE's Research Project on Cryptography (Grant No. Y3Z0013102).
Abstract: An efficient algorithm is proposed for factoring polynomials over an algebraic extension field defined by a polynomial ring modulo a maximal ideal. If the maximal ideal is given by its Gröbner basis, no extra Gröbner basis computation is needed for factoring a polynomial over this extension field. Nothing more than linear algebraic techniques are used to get a characteristic polynomial of a generic linear map. This polynomial is then factorized over the ground field, and from its factors the factorization of the polynomial over the extension field is obtained. The algorithm has been implemented in Magma, and computer experiments indicate that it is very efficient, particularly for complicated examples.
Funding: Supported in part by the National Key R&D Program of China (No. 2016QY04W0805), NSFC U1536106 and 61728209, the National Top-notch Youth Talents Program of China, the Youth Innovation Promotion Association CAS, the Beijing Nova Program, and a research grant from Ant Financial; partly supported by the International Cooperation Program on CyberSecurity, administered by SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China (No. SNSBBH-2017111036).
Abstract: A precise representation of attacks can benefit the detection of malware in both accuracy and efficiency. However, describing attacks precisely on the Android platform remains far from satisfactory. In addition, new features of Android, such as its communication mechanisms, introduce new challenges and difficulties for attack detection. In this paper, we propose abstract attack models that precisely capture the semantics of various Android attacks, including the corresponding targets, the involved behaviors, and their execution dependencies. Meanwhile, we construct a novel graph-based model called the inter-component communication graph (ICCG) to describe the internal control flows and inter-component communications of applications. The models take more communication channels into account while maximally preserving their program logic. Guided by the attack models, we propose a static searching approach to detect attacks hidden in the ICCG. To reduce the false positive rate, we introduce an additional dynamic confirmation step to check whether the detected attacks are false alarms. Experiments show that DROIDECHO can detect attacks in both benchmark and real-world applications effectively and efficiently, with a precision of 89.5%.
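A toy sketch of the graph-based idea: components become nodes of a directed graph labeled with coarse behaviors, inter-component communications become edges, and an attack pattern is searched as a path from a source behavior (reads sensitive data) to a sink behavior (sends data out). Node names, labels, and the pattern are illustrative; this is not the ICCG or the attack models defined in the paper.

```python
# Sketch: a toy inter-component graph and a search for source-to-sink paths.
import networkx as nx

g = nx.DiGraph()
# Nodes are app components annotated with coarse behavior labels (illustrative).
g.add_node("MainActivity", behaviors={"ui"})
g.add_node("ContactService", behaviors={"read_contacts"})
g.add_node("UploadReceiver", behaviors={"network_send"})
# Edges are inter-component communications (e.g., intents).
g.add_edge("MainActivity", "ContactService")
g.add_edge("ContactService", "UploadReceiver")

def find_leak_paths(graph, source_behavior, sink_behavior):
    """Return component paths from any source-labeled node to any sink-labeled node."""
    sources = [n for n, d in graph.nodes(data=True) if source_behavior in d["behaviors"]]
    sinks = [n for n, d in graph.nodes(data=True) if sink_behavior in d["behaviors"]]
    paths = []
    for s in sources:
        for t in sinks:
            paths.extend(nx.all_simple_paths(graph, s, t))
    return paths

print(find_leak_paths(g, "read_contacts", "network_send"))
# [['ContactService', 'UploadReceiver']]
```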
Funding: Supported by the National Natural Science Foundation of China (Grant No. 60970152), IIE's Research Project on Cryptography (Grant No. Y3Z0011102), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA06010701), and the National Key Basic Research Program of China (973 Program) (Grant No. 2011CB302400).
Abstract: In this paper, we first review the existing proofs of the Boneh-Franklin identity-based encryption scheme (BF-IBE for short) and show how to admit a new proof by slightly modifying the specifications of the hash functions of the original BF-IBE. Compared with prior proofs, our new proof provides a tighter security reduction and minimizes the use of random oracles, thus indicating that BF-IBE has better provable security with our new choices of hash functions. The techniques developed in our proof can also be applied to improve the security analysis of some other IBE schemes. As an independent technical contribution, we also give a rigorous proof of the Fujisaki-Okamoto (FO) transformation in the CPA-to-CCA case, which demonstrates that the efficiency of the FO transformation (CPA-to-CCA), in terms of the tightness of the security reduction, has long been underestimated. This result can remarkably benefit the security proofs of encryption schemes using the FO transformation for CPA-to-CCA enhancement.
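For orientation, the sketch below shows the shape of the classic FO-style CPA-to-CCA conversion: encryption derives its random coins from a hash of a random seed and the message, and decryption re-encrypts to check well-formedness. Here `cpa_encrypt` and `cpa_decrypt` are hypothetical placeholders for a concrete CPA-secure scheme, and the hash helpers stand in for random oracles; this is not the exact variant analyzed in the paper.

```python
# Sketch of a classic FO-style hybrid construction (CPA -> CCA):
#   c1 = CPAEnc(pk, sigma; r) with r = H(sigma, m),  c2 = G(sigma) XOR m.
# Decryption recovers sigma, recomputes r, and re-encrypts to validate c1.
# cpa_encrypt / cpa_decrypt are hypothetical placeholders for a concrete
# CPA-secure scheme; H and G model random oracles with SHA-2/SHAKE.
import hashlib, os

def H(sigma, m):                       # randomness-deriving hash (placeholder)
    return hashlib.sha256(b"H" + sigma + m).digest()

def G(sigma, length):                  # mask-generating hash (placeholder)
    return hashlib.shake_256(b"G" + sigma).digest(length)

def fo_encrypt(pk, m, cpa_encrypt):
    sigma = os.urandom(32)             # random seed
    r = H(sigma, m)                    # derandomized coins
    c1 = cpa_encrypt(pk, sigma, r)
    c2 = bytes(a ^ b for a, b in zip(G(sigma, len(m)), m))
    return c1, c2

def fo_decrypt(pk, sk, c, cpa_encrypt, cpa_decrypt):
    c1, c2 = c
    sigma = cpa_decrypt(sk, c1)
    m = bytes(a ^ b for a, b in zip(G(sigma, len(c2)), c2))
    if cpa_encrypt(pk, sigma, H(sigma, m)) != c1:   # re-encryption check
        raise ValueError("invalid ciphertext")
    return m
```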
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61877058 and 61872359, the Strategy Cooperation Project under Grant No. AQ-1701, and the CAS Project under Grant No. QYZDJ-SSW-SYS022.
Abstract: The GVW algorithm is an efficient signature-based algorithm for computing Gröbner bases. In this paper, the authors consider the implementation of the GVW algorithm using linear algebra and speed up GVW via a substituting method. As is well known, most of the computing time of a Gröbner basis is spent on reductions of polynomials. Thus, linear algebraic techniques, such as matrix operations, have been used extensively to speed up implementations. In particular, one-direction (also called signature-safe) reduction is used in signature-based algorithms, because polynomials (or rows in matrices) with larger signatures can only be reduced by polynomials (rows) with smaller signatures. The authors propose a new method to construct sparser matrices for signature-based algorithms via a substituting method. Specifically, instead of only storing the original polynomials as in GVW, the authors also record many equivalent but sparser polynomials at the same time. In matrix construction, denser polynomials are substituted by sparser equivalent ones. As the matrices get sparser, they can be eliminated more efficiently. Two specific algorithms, Block-GVW and LMGVW, are presented, and their combination is the Sub-GVW algorithm. The correctness of the newly proposed method is proved, and the experimental results demonstrate the efficiency of this new method.
Funding: Supported in part by the National Natural Science Foundation of China under Grant Nos. 11371356 and 61877058, the CAS Project QYZDJ-SSW-SYS022, and the Strategy Cooperation Project AQ-1701.
Abstract: This paper studies the problem of constructing lightweight involutory maximal distance separable (MDS) matrices. The authors find the exact lower bound of the XOR counts for 4×4 involutory MDS matrices over F_2^4. Further, some new structures of 4×4 involutory MDS matrices over F_2^m are provided to construct involutory MDS matrices, and by using these structures the authors construct the lightest 4×4 involutory MDS matrices over F_2^8 known so far.
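As a concrete illustration of the objects involved, the sketch below implements multiplication in F_2^4 with the (illustrative) irreducible polynomial x^4 + x + 1, evaluates one common XOR-count metric for multiplication by a field element, and checks whether a matrix over the field squares to the identity. The example values are arbitrary, not the constructions in the paper.

```python
# Sketch: arithmetic in F_2^4 = F_2[x]/(x^4 + x + 1) (illustrative modulus),
# an XOR-count metric for multiplication by a field element, and an involutory check.
MOD, DEG = 0b10011, 4          # modulus x^4 + x + 1, field degree

def gf_mul(a, b):
    """Carry-less multiply then reduce modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << DEG):
            a ^= MOD
    return r

def xor_count(alpha):
    """One common XOR-count metric: ones in the multiplication matrix minus DEG."""
    ones = sum(bin(gf_mul(alpha, 1 << j)).count("1") for j in range(DEG))
    return ones - DEG

def is_involutory(mat):
    """Check mat * mat == identity over F_2^4 for a square matrix of field elements."""
    n = len(mat)
    for i in range(n):
        for j in range(n):
            s = 0
            for k in range(n):
                s ^= gf_mul(mat[i][k], mat[k][j])
            if s != (1 if i == j else 0):
                return False
    return True

print(xor_count(0x2))                      # multiplication by x costs 1 XOR here
print(is_involutory([[1, 0], [0, 1]]))     # the identity is trivially involutory
```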
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 11371356, the CAS Project QYZDJ-SSW-SYS022, and the Strategy Cooperation Project AQ-1701.
Abstract: This paper presents a new algorithm for computing the extended Hensel construction (EHC) of multivariate polynomials in the main variable x and sub-variables u1, u2, ···, um over a number field K. This algorithm first constructs a set by using the resultant of two initial coprime factors with respect to x, and then obtains the Hensel factors by comparing the coefficients of x^i on both sides of an equation. Since the Hensel factors are polynomials in the main variable with coefficients in the fraction field K(u1, u2, ···, um), the computational cost of handling rational functions can be high. Therefore, the authors use a method that multiplies by the resultant and removes the denominators of the rational functions. Unlike previously developed algorithms that use interpolation functions or Gröbner bases, the algorithm relies little on polynomial division and avoids multiplying by different factors when removing the denominators of the Hensel factors. All algorithms are implemented in Magma, a computational algebra system, and experiments indicate that the new algorithm is more efficient.
Funding: Our research was supported by NSFC U1836211; the recipient is Professor Kai Chen.
Abstract: Decompilation aims to analyze and transform low-level programming language (PL) code, such as binary code or assembly code, into an equivalent high-level PL. Decompilation plays a vital role in cyberspace security fields such as software vulnerability discovery and analysis and malicious code detection and analysis, as well as in software engineering fields such as source code analysis, optimization, and cross-language, cross-operating-system migration. Unfortunately, existing decompilers mainly rely on experts to write rules, which leads to bottlenecks such as low scalability, development difficulties, and long cycles. The generated high-level PL code often violates code-writing specifications, and its readability is still relatively low. These problems hinder the efficiency of advanced applications (e.g., vulnerability discovery) based on decompiled high-level PL code. In this paper, we propose a decompilation approach based on the attention-based neural machine translation (NMT) mechanism, which converts low-level PL into high-level PL while achieving legibility and keeping the result functionally similar. To compensate for the information asymmetry between low-level and high-level PL, a translation method based on the basic operations of low-level PL is designed. This method improves the generalization of the NMT model and captures the translation rules between PLs more accurately and efficiently. Besides, we implement a neural decompilation framework called Neutron. The evaluation on two practical applications shows that Neutron's average program accuracy is 96.96%, which is better than the traditional NMT model.
Abstract: In recent years, the widespread application of open-source software (OSS) has brought great convenience to software developers. However, OSS always faces unavoidable security risks, such as open-source code defects and security vulnerabilities. To find out the OSS risks in time, we carry out an empirical study to identify indicators for evaluating OSS. To achieve a comprehensive understanding of OSS assessment, we collect 56 papers from prestigious academic venues (such as IEEE Xplore, ACM Digital Library, DBLP, and Google Scholar) published in the past 21 years. During the investigation, we first identify the main concerns in selecting OSS and distill five types of commonly used indicators for assessing OSS. We then conduct a comparative analysis to discuss how these indicators are used in each surveyed study and their differences. Moreover, we further undertake a correlation analysis between these indicators and uncover 13 confirmed conclusions and four controversial cases in these studies. Finally, we discuss several possible applications of these conclusions, which are insightful for research on OSS and the software supply chain.
Funding: The National Natural Science Foundation of China under Grant Nos. 61977060 and 61877058.
Abstract: In this paper, a new method to analyze Boolean functions is proposed. With this method, one can analyze the balancedness, the nonlinearity, and the input-output correlation of vectorial Boolean functions. The basic idea of the method is to compute the refined covers of some parametric Boolean polynomial systems that are equivalent to these problems. With a refined cover, the parameter space is divided into several disjoint components, and on each component the parametric Boolean polynomial system has a fixed number of solutions. An efficient algorithm based on the characteristic set method for computing refined covers of parametric Boolean polynomial systems is presented. Experimental results on instances generated from cryptanalysis show that the new method is efficient and can solve some instances that cannot be solved in reasonable time by other methods.
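To make the notion of a refined cover concrete, the brute-force sketch below takes a tiny parametric Boolean system, counts its solutions for every parameter assignment, and groups parameter values by solution count. The real algorithm uses the characteristic set method rather than enumeration, and the example system is illustrative.

```python
# Sketch: group parameter assignments of a tiny parametric Boolean system
# by the number of solutions, i.e., enumerate a (brute-force) refined cover.
from itertools import product

def count_solutions(system, params, n_vars):
    """Count x in F_2^n with f(x, params) == 0 for every f in the system."""
    return sum(
        all(f(x, params) == 0 for f in system)
        for x in product((0, 1), repeat=n_vars)
    )

# Illustrative system in variables x = (x0, x1) and parameters a = (a0, a1):
#   f1 = x0*x1 + a0,   f2 = x0 + x1 + a1   (arithmetic over F_2).
system = [
    lambda x, a: (x[0] & x[1]) ^ a[0],
    lambda x, a: x[0] ^ x[1] ^ a[1],
]

cover = {}
for a in product((0, 1), repeat=2):
    cover.setdefault(count_solutions(system, a, 2), []).append(a)
for count, component in sorted(cover.items()):
    print(f"{count} solution(s) on parameters {component}")
```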
Funding: Supported in part by the Foundation of the State Key Laboratory of Information Security under Grant 2021-MS-04, and in part by the Natural Science Foundation of Shaanxi Province under Grant 2022-JM-365.
Abstract: The Feather-Weight (FeW) cipher is a lightweight block cipher proposed by Kumar et al. in 2019, which takes a 64-bit plaintext as input and produces a 64-bit ciphertext. As Kumar et al. state, FeW is a software-oriented design with the aim of achieving high efficiency in software-based environments. FeW appears to be immune to many cryptographic attacks, such as linear, impossible differential, differential, and zero-correlation attacks. However, in recent work, Xie et al. reassessed the security of FeW. More precisely, they proved that under differential fault analysis (DFA) on the encryption states, an attacker can completely recover the master secret key. In this paper, we revisit the block cipher FeW and consider DFA on its key schedule algorithm, which is a rather popular cryptanalysis technique for many kinds of block ciphers. In particular, by injecting faults into the 30th and 29th round subkeys, respectively, one can recover about 55 of the 80 master key bits (about 69%). Then, by brute-force searching the remaining bits, one can obtain the full master secret key. The simulations and experimental results show that our analysis is practical.
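To illustrate the candidate-filtering principle behind DFA in general (not FeW's key-schedule attack), the toy sketch below injects a single-bit fault before the final S-box of a one-round construction C = S[x] ^ k and keeps only the subkey candidates consistent with a single-bit input difference; intersecting the surviving sets over several faults narrows in on the key. The 4-bit S-box, the round structure, and the fault model are all illustrative.

```python
# Toy sketch of differential fault analysis: filter subkey candidates by
# consistency with a single-bit fault injected before the last S-box.
# Not the FeW cipher or the attack from the paper.
import random

random.seed(1)
SBOX = list(range(16))
random.shuffle(SBOX)                      # a random bijective 4-bit S-box (toy)
INV = [SBOX.index(v) for v in range(16)]

K = 0xB                                   # secret 4-bit subkey (unknown to attacker)

def last_round(x, fault=0):
    """Toy last round: S-box on the (possibly faulted) state, then key addition."""
    return SBOX[x ^ fault] ^ K

candidates = set(range(16))
for _ in range(4):                        # several fault injections on random states
    x = random.randrange(16)              # unknown internal state
    e = 1 << random.randrange(4)          # single-bit fault
    c, c_faulty = last_round(x), last_round(x, e)
    consistent = {
        k for k in range(16)
        if bin(INV[c ^ k] ^ INV[c_faulty ^ k]).count("1") == 1
    }
    candidates &= consistent              # keep keys consistent with every fault

print(sorted(candidates))                 # shrinks towards the true subkey 0xB
```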