In this paper, we present reduction algorithms based on the principle of Skowron's discernibility matrix: the ordered attributes method. The completeness of the algorithms for the Pawlak reduct and their uniqueness for a given order of the attributes are proved. Since a discernibility matrix requires memory on the order of |U|², where U is the universe of objects, it would be impossible to apply these algorithms directly to a massive object set. To solve this problem, a so-called quasi-discernibility matrix and two reduction algorithms are proposed. Although the proposed algorithms are incomplete for the Pawlak reduct, their optimal paradigms ensure completeness as long as they satisfy certain conditions. Finally, we consider the problem of reduction for distributed object sets.
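The |U|² memory cost is easy to see in a toy implementation: the discernibility matrix stores an entry for every pair of objects with different decisions. The decision table below and the greedy order-driven pass are illustrative simplifications under assumed data, not the paper's exact algorithms.

```python
# Toy decision table: each row = (condition attribute values, decision).
# Values and the greedy pass are hypothetical, for illustration only.
objects = [
    ((0, 1, 0), 'yes'),
    ((1, 1, 0), 'no'),
    ((0, 0, 1), 'yes'),
    ((1, 0, 1), 'no'),
]
n_attrs = 3

def discernibility_matrix(objs):
    """Entry (i, j): attributes discerning objects with different decisions."""
    m = {}
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            (xi, di), (xj, dj) = objs[i], objs[j]
            if di != dj:
                m[(i, j)] = {a for a in range(n_attrs) if xi[a] != xj[a]}
    return m

def reduct_by_order(m, order):
    """Scan attributes in the given order; keep one only if some matrix
    entry is not yet covered by the attributes kept so far."""
    kept = []
    uncovered = {k: set(v) for k, v in m.items()}
    for a in order:
        if any(a in attrs for attrs in uncovered.values()):
            kept.append(a)  # attribute a is needed to discern some pair
            uncovered = {k: v for k, v in uncovered.items() if a not in v}
        if not uncovered:
            break
    return kept

m = discernibility_matrix(objects)
print(reduct_by_order(m, [0, 1, 2]))  # → [0]
print(reduct_by_order(m, [2, 1, 0]))  # → [2, 0]
```

The two calls show why the result is unique only *for a given order*: scanning the attributes in a different order can keep a different (here larger) set.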
The screening of novel materials with good performance and the modelling of quantitative structure-activity relationships (QSARs), among other issues, are hot topics in the field of materials science. Traditional experiments and computational modelling often consume tremendous time and resources and are limited by their experimental conditions and theoretical foundations. Thus, it is imperative to develop a new method of accelerating the discovery and design process for novel materials. Recently, materials discovery and design using machine learning have been receiving increasing attention and have achieved great improvements in both time efficiency and prediction accuracy. In this review, we first outline the typical mode of and basic procedures for applying machine learning in materials science, and we classify and compare the main algorithms. Then, the current research status is reviewed with regard to applications of machine learning in material property prediction, in new materials discovery, and for other purposes. Finally, we discuss problems related to machine learning in materials science, propose possible solutions, and forecast potential directions of future research. By directly combining computational studies with experiments, we hope to provide insight into the parameters that affect the properties of materials, thereby enabling more efficient and target-oriented research on materials discovery and design.
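The core workflow behind ML-driven property prediction is featurize, train, predict. A deliberately tiny sketch, with a nearest-neighbour predictor standing in for real learners and made-up descriptor vectors standing in for real composition or structure features:

```python
# Minimal featurize -> train -> predict loop. The descriptor vectors and
# "band gap" values are invented, and 1/2-NN is a stand-in for the
# algorithms compared in the review (NNs, SVMs, tree ensembles, ...).

def euclid(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, query, k=1):
    """Predict a property as the mean over the k nearest training points."""
    nearest = sorted(train, key=lambda t: euclid(t[0], query))[:k]
    return sum(y for _, y in nearest) / k

# (descriptor vector, measured property) pairs -- hypothetical data
train = [((1.0, 0.2), 1.1), ((0.9, 0.3), 1.0), ((0.1, 0.9), 3.2)]
print(knn_predict(train, (0.95, 0.25), k=2))  # averages the two closest
```

Screening then amounts to running `knn_predict` over a large pool of candidate descriptors and ranking by the predicted property, which is where the time savings over experiments and first-principles modelling come from.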
Structural health monitoring (SHM) is a multi-discipline field that involves the automatic sensing of structural loads and response by means of a large number of sensors and instruments, followed by a diagnosis of structural health based on the collected data. Because an SHM system implemented on a structure automatically senses, evaluates, and warns about structural conditions in real time, massive data are a significant feature of SHM. The techniques related to massive data are referred to as data science and engineering, and include acquisition techniques, transmission techniques, management techniques, and processing and mining algorithms for massive data. This paper provides a brief review of the state of the art of data science and engineering in SHM as investigated by these authors, and covers the compressive-sampling-based data-acquisition algorithm, the anomaly-data diagnosis approach using a deep learning algorithm, crack identification approaches using computer vision techniques, and condition assessment approaches for bridges using machine learning algorithms. Future trends are discussed in the conclusion.
Computer vision techniques, in conjunction with acquisition through remote cameras and unmanned aerial vehicles (UAVs), offer promising non-contact solutions to civil infrastructure condition assessment. The ultimate goal of such a system is to automatically and robustly convert the image or video data into actionable information. This paper provides an overview of recent advances in computer vision techniques as they apply to the problem of civil infrastructure condition assessment. In particular, relevant research in the fields of computer vision, machine learning, and structural engineering is presented. The work reviewed is classified into two types: inspection applications and monitoring applications. The inspection applications reviewed include identifying context such as structural components, characterizing local and global visible damage, and detecting changes from a reference image. The monitoring applications discussed include static measurement of strain and displacement, as well as dynamic measurement of displacement for modal analysis. Subsequently, some of the key challenges that persist toward the goal of automated vision-based civil infrastructure inspection and monitoring are presented. The paper concludes with ongoing work aimed at addressing some of these challenges.
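The simplest form of the change-detection task listed above is pixel-wise differencing against a reference image. The sketch below is only that baseline, on invented intensity grids; production pipelines add image registration, illumination normalization, and learned classifiers on top.

```python
# Baseline change detection: flag pixels whose intensity differs from the
# reference by more than a threshold. Grids and threshold are illustrative.

def changed_pixels(reference, current, threshold=30):
    """Return (row, col) coordinates where the change exceeds the threshold."""
    return [
        (r, c)
        for r, row in enumerate(reference)
        for c, ref in enumerate(row)
        if abs(current[r][c] - ref) > threshold
    ]

ref = [[10, 10, 10],
       [10, 10, 10]]
cur = [[10, 90, 10],   # one bright spot, e.g. hypothetical new damage
       [10, 10, 12]]   # small change below threshold is ignored
print(changed_pixels(ref, cur))  # → [(0, 1)]
```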
With a ten-year horizon from concept to reality, and on the eve of fifth-generation (5G) deployment, it is time to start thinking about what sixth-generation (6G) mobile communications will be. To pave the way for the development of 6G and beyond, we provide our vision for 6G in this paper. We first introduce the state-of-the-art technologies in 5G and indicate the necessity of studying 6G. Taking the current and emerging development of wireless communications into consideration, we envision 6G to include three major aspects, namely, mobile ultra-broadband, super Internet-of-Things (IoT), and artificial intelligence (AI). Then, we review key technologies for realizing each aspect. In particular, terahertz (THz) communications can be used to support mobile ultra-broadband, symbiotic radio and satellite-assisted communications can be used to achieve super IoT, and machine learning techniques are promising candidates for AI. For each technology, we provide the basic principle, key challenges, and state-of-the-art approaches and solutions.
The stable and safe operation of power grids is an important guarantee of economic development. Support Vector Machine (SVM)-based stability analysis is a significant method whose use began in the last century. However, the SVM method has several drawbacks, e.g., low accuracy around the hyperplane and a heavy computational burden when dealing with large amounts of data. To tackle these problems, the algorithm proposed in this paper optimizes the SVM model in three respects. First, the gray area of the SVM model is identified from its probability output, and the corresponding samples are processed separately. Second, clustering of the samples in the gray area is improved, which alleviates the low training accuracy of the SVM model in the gray area while reducing the sample size and improving efficiency. Finally, by adjusting the penalty factor of the SVM model after the samples are clustered, the number of unstable states misjudged as stable is reduced. Test results on the IEEE 118-bus test system verify the proposed method.
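The "gray area" idea can be sketched in a few lines: samples whose predicted stability probability is close to 0.5 sit near the hyperplane and are set aside for special handling. The unit-slope sigmoid and the 0.4-0.6 band below are illustrative choices, not the paper's calibrated probability model; sample names and margins are invented.

```python
import math

def probability(margin):
    """Map a signed distance to the hyperplane to a pseudo-probability
    (a Platt-style sigmoid with unit slope, for illustration)."""
    return 1.0 / (1.0 + math.exp(-margin))

def split_gray(samples, low=0.4, high=0.6):
    """Partition (name, margin) pairs into gray-area and confident sets."""
    gray, confident = [], []
    for name, margin in samples:
        (gray if low <= probability(margin) <= high else confident).append(name)
    return gray, confident

samples = [("case1", 2.1), ("case2", -0.1), ("case3", -1.8), ("case4", 0.3)]
gray, confident = split_gray(samples)
print(gray)       # near-hyperplane cases, routed to special processing
print(confident)  # cases the SVM already classifies with margin to spare
```

Only the `gray` subset then needs the extra clustering and penalty-factor treatment, which is where the reported efficiency gain comes from.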
Due to increasing complexity, uncertainty, and data dimensions in power systems, conventional methods often meet bottlenecks when attempting to solve decision and control problems. Therefore, data-driven methods for solving such problems are being extensively studied. Deep reinforcement learning (DRL) is one of these data-driven methods and is regarded as real artificial intelligence (AI). DRL is a combination of deep learning (DL) and reinforcement learning (RL). This field of research has been applied to solve a wide range of complex sequential decision-making problems, including those in power systems. This paper first reviews the basic ideas, models, algorithms, and techniques of DRL. Applications in power systems, such as energy management, demand response, the electricity market, and operational control, are then considered. In addition, recent advances in DRL, including the combination of RL with other classical methods, and the prospects and challenges of applications in power systems are discussed.
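The RL half of DRL can be shown with tabular Q-learning on a toy five-state chain; DRL replaces the table with a deep network so the same update scales to power-system state spaces. The environment, rewards, and hyperparameters below are invented for illustration.

```python
import random

N_STATES, GOAL = 5, 4          # states 0..4, reward only at state 4
ACTIONS = (-1, 1)              # move left / right along the chain
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3
random.seed(0)

for _ in range(2000):          # episodes from random non-goal start states
    s = random.randrange(GOAL)
    for _ in range(100):       # step cap so every episode terminates
        if s == GOAL:
            break
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # temporal-difference update toward r + gamma * max_a' Q(s', a')
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                              - q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(GOAL)]
print(policy)  # greedy action per non-goal state after training
```

After training, the greedy policy moves right toward the rewarded state, the tabular analogue of a DRL controller learning a dispatch or control rule from interaction alone.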
The digital twin (DT) framework is introduced in the context of its application to power grid online analysis. In the development of a new power grid real-time online analysis system, an online analysis digital twin (OADT) has been implemented to realize the new online analysis architecture. The OADT approach is presented and its prominent features are discussed. The presentation, discussion, and performance testing are based on a large-scale grid network model (40,000+ buses), exported directly from the energy management system (EMS) of an actual power grid. A plan to apply the OADT approach to digitize power grid dispatching rules is also outlined.
Predicting the tunneling-induced maximum ground surface settlement is a complex problem, since the settlement depends on many intrinsic and extrinsic factors. This study investigates the efficiency and feasibility of six machine learning (ML) algorithms, namely, the back-propagation neural network, wavelet neural network, general regression neural network (GRNN), extreme learning machine, support vector machine, and random forest (RF), for predicting tunneling-induced settlement. Field data sets including geological conditions, shield operational parameters, and tunnel geometry, collected from four tunnel sections totaling 3.93 km, are used to build the models. Three indicators, the mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²), are used to evaluate the performance of each computational model. The results indicate that ML algorithms have great potential for predicting tunneling-induced settlement compared with the traditional multivariate linear regression method. The GRNN and RF algorithms show the best performance among the six ML algorithms and accurately capture the evolution of tunneling-induced settlement. The correlation between the input variables and the settlement is also investigated using the Pearson correlation coefficient.
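The three evaluation indicators are standard and easy to state in plain Python. The settlement values below are made up purely to exercise the formulas.

```python
# MAE, RMSE, and R^2 for regression evaluation. y = observed, p = predicted.

def mae(y, p):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def rmse(y, p):
    """Root mean square error."""
    return (sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)) ** 0.5

def r2(y, p):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, p))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

y_true = [12.0, 8.5, 15.2, 10.1]   # hypothetical settlements (mm)
y_pred = [11.4, 9.0, 14.8, 10.6]
print(round(mae(y_true, y_pred), 3),
      round(rmse(y_true, y_pred), 3),
      round(r2(y_true, y_pred), 3))
```

Lower MAE and RMSE and an R² closer to 1 indicate a better model, which is how the six ML algorithms are ranked in the study.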
The development of machine learning in complex systems is currently hindered by two problems. The first is the inefficiency of exploration in the state and action spaces, which makes some state-of-the-art data-driven algorithms data-hungry. The second is the lack of a general theory that can be used to analyze and implement a complex learning system. In this paper, we propose a general method that addresses both issues. We combine the concepts of descriptive learning, predictive learning, and prescriptive learning into a uniform framework, so as to build a parallel system that allows the learning system to improve itself by self-boosting. Formulating a new perspective on data, knowledge, and action, we provide a new methodology, called parallel learning, for designing machine learning systems for real-world problems.
Lithium-ion batteries have become the third generation of space batteries and are widely used in spacecraft. Remaining Useful Life (RUL) estimation is essential to a spacecraft because the battery is a critical part that determines its lifetime and reliability. The Relevance Vector Machine (RVM) is a data-driven algorithm used to estimate a battery's RUL due to its sparsity and its capability for uncertainty management. In particular, some regression cases indicate that the RVM achieves better short-term than long-term prediction performance. As a nonlinear kernel learning algorithm, its coefficient matrix and relevance vectors are fixed once training is completed. Moreover, the RVM is easily influenced by noise in the training data. This work therefore proposes an iteratively updated approach to improve the long-term performance of battery RUL prediction. First, when a new estimate is output by the RVM, a Kalman filter is applied to optimize this estimate using a physical degradation model. Then, the optimized estimate is added to the training set as an on-line sample, the RVM model is re-trained, and the coefficient matrix and relevance vectors are dynamically adjusted for the next iterative prediction. Experimental results on a commercial battery test data set and a satellite battery data set both indicate that the proposed method achieves better RUL estimation performance.
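The structure of that iterative loop, predict, fuse with a degradation model, feed the fused value back as an on-line training sample, can be sketched with simple stand-ins: a slope-extrapolation "predictor" in place of the RVM and a scalar Kalman filter with a linear fade model. All numbers (fade rate, variances, capacities) are hypothetical, and this is the loop's shape only, not the paper's method.

```python
FADE, Q, R = 0.002, 1e-5, 4e-4   # model fade/cycle, process & measurement var

def predictor(history):
    """Stand-in for the RVM: average-slope extrapolation one step ahead."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope

def kalman_step(x, p, z):
    """One predict/update cycle of a scalar Kalman filter."""
    x_pred, p_pred = x - FADE, p + Q       # linear degradation model
    k = p_pred / (p_pred + R)              # Kalman gain
    return x_pred + k * (z - x_pred), (1 - k) * p_pred

history = [1.000, 0.997, 0.995, 0.992]     # observed capacity (normalized)
x, p = history[-1], 1e-3
for _ in range(3):                         # iterative long-term prediction
    z = predictor(history)                 # raw estimate from the "RVM"
    x, p = kalman_step(x, p, z)            # optimize it with the model
    history.append(x)                      # feed back as an on-line sample
print([round(v, 4) for v in history[-3:]])
```

Because each fused estimate rejoins the training window, the predictor's inputs are smoothed by the physical model at every step, which is the mechanism the paper credits for the improved long-term prediction.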
Let there be light, to change the world we want to be! Over the past several decades, ever since the birth of the first laser, mankind has witnessed the development of the science of light, as light-based technologies have revolutionized our lives. Needless to say, photonics has now penetrated many aspects of science and technology, turning into an important and dynamically changing field of increasing interdisciplinary interest. In this inaugural issue of eLight, we highlight a few emerging trends in photonics that we think are likely to have major impact at least in the upcoming decade, spanning from integrated quantum photonics and quantum computing, through topological/non-Hermitian photonics and topological insulator lasers, to AI-empowered nanophotonics and photonic machine learning. This Perspective is by no means an attempt to summarize all the latest advances in photonics, yet we hope our subjective vision can fuel inspiration and foster excitement in scientific research, especially for young researchers who love the science of light.
Funding (materials discovery and design via machine learning): This work was supported by the National Natural Science Foundation of China (Grant Nos. U1630134, 51622207, and 51372228); the National Key Research and Development Program of China (Grant Nos. 2017YFB0701600 and 2017YFB0701500); the Shanghai Institute of Materials Genome from the Shanghai Municipal Science and Technology Commission (Grant No. 14DZ2261200); the Shanghai Municipal Education Commission (Grant No. 14ZZ099); and the Natural Science Foundation of Shanghai (Grant No. 16ZR1411200).
Funding (data science and engineering in SHM): the National Natural Science Foundation of China (51638007, 51478149, 51678203, and 51678204).
Funding (computer vision for civil infrastructure assessment): supported in part by funding from the US Army Corps of Engineers under the project "Cybermodeling: A Digital Surrogate Approach for Optimal Risk-Based Operations and Infrastructure" (W912HZ-17-2-0024).
Funding (6G vision): supported in part by the National Natural Science Foundation of China under Grants 61631005, 61801101, U1801261, and 61571100.
Funding (SVM-based power grid stability analysis): This work was supported by China's National Key Research and Development Program (2017YFB0902201), the National Natural Science Foundation of China under Grant 51777104, and a Science and Technology Project of the State Grid Corporation of China.
Funding (DRL in power systems): This work is supported by the National Natural Science Foundation of China under Grant No. 61571296 and the National Key Research and Development Program of China under 2018YFF0214705.
Funding (power grid digital twin): This work was supported by the National Natural Science Foundation of China under Grant U1766214.
Funding (tunneling-induced settlement prediction): The present work was carried out with the support of the Research Program of the Changsha Science and Technology Bureau (cskq 1703051); the National Natural Science Foundation of China (Grant Nos. 41472244 and 51878267); the Industrial Technology and Development Program of Zhongjian Tunnel Construction Co., Ltd. (17430102000417); and the Natural Science Foundation of Hunan Province, China (2019JJ30006).
Funding (parallel learning): supported in part by the National Natural Science Foundation of China (91520301).
Funding (battery RUL estimation): co-supported in part by the National Natural Science Foundation of China (Nos. 61301205 and 61571160) and the Natural Scientific Research Innovation Foundation at Harbin Institute of Technology (No. HIT.NSRIF.2014017).
Funding (photonics perspective): support from the National Key R&D Program of China under Grant No. 2017YFA0303800; MS acknowledges support from the Israel Science Foundation.