This paper introduces the new meaning of Blending Learning (or Blended Learning), and points out that the proposal and wide acceptance of this new meaning show that educational thinking in the international educational technology community is undergoing another profound change, and that it marks a further step in the development of educational technology theory. The author also argues this point from reflections on constructivist theory and from a deepened understanding of the educational applications of information technology…
A new theoretical approach to language has emerged in the past 10-15 years that allows linguistic observations about form-meaning pairings — constructions — to be stated directly. Constructionist approaches aim to account for the full range of facts about language, without assuming that a particular subset of the data is part of a privileged "core". Researchers argue that unusual constructions shed light on more general issues, and serve to illuminate what is required for a complete account of language.
In this paper, we present reduction algorithms based on the principle of Skowron's discernibility matrix: the ordered attributes method. The completeness of the algorithms for the Pawlak reduct and their uniqueness for a given order of the attributes are proved. Since a discernibility matrix requires memory of size |U|², where U is the universe of objects, it would be impossible to apply these algorithms directly to a massive object set. To solve this problem, a so-called quasi-discernibility matrix and two reduction algorithms are proposed. Although the proposed algorithms are incomplete for the Pawlak reduct, their optimal paradigms ensure completeness as long as some conditions are satisfied. Finally, we consider the problem of the reduction of distributive object sets.
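The discernibility-matrix idea this abstract builds on can be illustrated in a few lines. The sketch below is a minimal assumption-laden illustration, not the authors' ordered-attributes algorithm: a simple greedy elimination in a fixed attribute order stands in for their method, and the toy decision table is invented for the example.

```python
from itertools import combinations

def discernibility_matrix(objects, decisions):
    """For each pair of objects with different decisions, record the set
    of condition attributes (by index) on which the two objects differ."""
    entries = {}
    for (i, a), (j, b) in combinations(list(enumerate(objects)), 2):
        if decisions[i] != decisions[j]:
            diff = frozenset(k for k in range(len(a)) if a[k] != b[k])
            if diff:
                entries[(i, j)] = diff
    return entries

def ordered_reduct(entries, n_attrs):
    """Greedy reduct in a fixed attribute order 0..n_attrs-1: drop an
    attribute only if every differing pair stays discernible without it."""
    reduct = set(range(n_attrs))
    for k in range(n_attrs):
        trial = reduct - {k}
        if all(diff & trial for diff in entries.values()):
            reduct = trial
    return sorted(reduct)

# Toy decision table: 3 objects, 3 condition attributes, binary decision.
objects = [(1, 0, 0), (1, 0, 1), (0, 1, 0)]
decisions = [0, 1, 1]
entries = discernibility_matrix(objects, decisions)
print(ordered_reduct(entries, 3))  # attribute 0 is redundant here
```

Note the |U|² cost the abstract mentions: the matrix has one entry per object pair, which is what motivates the quasi-discernibility variant for massive object sets.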
Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) into a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language representation learning and its research progress. Then we systematically categorize existing PTMs based on a taxonomy with four different perspectives. Next, we describe how to adapt the knowledge of PTMs to downstream tasks. Finally, we outline some potential directions for future PTM research. This survey is intended to be a hands-on guide for understanding, using, and developing PTMs for various NLP tasks.
We present a method for solving partial differential equations using artificial neural networks and an adaptive collocation strategy. In this procedure, a coarse grid of training points is used at the initial training stages, while more points are added at later stages based on the value of the residual at a larger set of evaluation points. This method increases the robustness of the neural network approximation and can result in significant computational savings, particularly when the solution is non-smooth. Numerical results are presented for benchmark problems for scalar-valued PDEs, namely the Poisson and Helmholtz equations, as well as for an inverse acoustics problem.
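The refinement step described above (evaluate the residual on a larger candidate set, then enlarge the training grid where it is worst) can be sketched without the network itself. In this minimal sketch, an analytic toy function stands in for the network's PDE residual, the domain is assumed to be [0, 1], and all names and parameters are illustrative rather than the authors':

```python
import numpy as np

def adaptive_collocation_step(residual_fn, train_pts, n_candidates=1000, n_add=20, seed=0):
    """One refinement step: evaluate |residual| on a large random candidate
    set and append the worst-resolved points to the training grid."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(0.0, 1.0, size=n_candidates)  # domain assumed [0, 1]
    r = np.abs(residual_fn(candidates))
    worst = candidates[np.argsort(r)[-n_add:]]  # points with largest residual
    return np.concatenate([train_pts, worst])

# Toy residual sharply peaked near x = 0.8, mimicking a non-smooth feature.
coarse = np.linspace(0.0, 1.0, 11)
refined = adaptive_collocation_step(lambda x: np.exp(-200.0 * (x - 0.8) ** 2), coarse)
```

With the peaked toy residual, the 20 added points cluster tightly around x = 0.8, which is exactly the behavior that makes the strategy pay off for non-smooth solutions.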
The irregular domain and lack of ordering make it challenging to design deep neural networks for point cloud processing. This paper presents a novel framework named Point Cloud Transformer (PCT) for point cloud learning. PCT is based on the Transformer, which has achieved huge success in natural language processing and displays great potential in image processing. It is inherently permutation invariant for processing a sequence of points, making it well-suited for point cloud learning. To better capture local context within the point cloud, we enhance input embedding with the support of farthest point sampling and nearest neighbor search. Extensive experiments demonstrate that PCT achieves state-of-the-art performance on shape classification, part segmentation, semantic segmentation, and normal estimation tasks.
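The farthest point sampling mentioned in the input embedding can be sketched as follows. This is a generic illustration of the classic algorithm, not PCT's implementation, and the deterministic starting point and example cloud are assumptions for the sketch:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Select k indices that are maximally spread out: repeatedly pick the
    point farthest (in min-distance) from the set chosen so far."""
    chosen = [0]  # deterministically start from the first point (an assumption)
    dist = np.linalg.norm(points - points[0], axis=1)  # distance to chosen set
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))           # farthest remaining point
        chosen.append(nxt)
        # Each point keeps its distance to the *nearest* chosen point.
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

# Example: four corners of a square plus a cluster in the middle; FPS with
# k = 4 recovers the corners and skips the redundant cluster.
cloud = np.array([[0., 0.], [0., 10.], [10., 0.], [10., 10.],
                  [5., 5.], [5.1, 5.0], [4.9, 5.2]])
print(sorted(farthest_point_sampling(cloud, 4).tolist()))  # [0, 1, 2, 3]
```

Pairing sampled centers like these with a nearest-neighbor search around each one is what lets the embedding capture local context, as the abstract describes.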
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 61751201 and 61672162), the Shanghai Municipal Science and Technology Major Project (Grant No. 2018SHZDZX01), and ZJLab.
Funding: supported by the National Natural Science Foundation of China (Project Number 61521002) and the Joint NSFC–DFG Research Program (Project Number 61761136018).