Abstract: We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold is learned by constructing an approximation for the tangent space at each point, and those tangent spaces are then aligned to give the global coordinates of the data points with respect to the underlying manifold. We also present an error analysis of our algorithm showing that reconstruction errors can be quite small in some cases. We illustrate our algorithm using curves and surfaces in both 2D/3D and higher-dimensional Euclidean spaces. We also address several theoretical and algorithmic issues for further research and improvements.
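The two stages described above (local tangent-space estimation, then global alignment) can be sketched in NumPy. This is a minimal illustration of the general idea, not the authors' implementation; the neighborhood size `k` and the brute-force nearest-neighbor search are simplifying assumptions.

```python
import numpy as np

def ltsa(X, d=2, k=10):
    """Sketch of local tangent space alignment.

    X : (N, D) data matrix; d : intrinsic dimension; k : neighborhood size.
    Returns the (N, d) global coordinates and the eigenvalues of the
    alignment matrix.
    """
    N = X.shape[0]
    # Brute-force k-nearest neighbors (each point is its own 0th neighbor).
    dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(dist, axis=1)[:, :k]
    B = np.zeros((N, N))
    for i in range(N):
        idx = nbrs[i]
        Xi = X[idx] - X[idx].mean(axis=0)        # centered neighborhood (k x D)
        # Top-d left singular vectors span the estimated tangent coordinates.
        U, _, _ = np.linalg.svd(Xi, full_matrices=False)
        G = np.hstack([np.ones((k, 1)) / np.sqrt(k), U[:, :d]])
        # Accumulate the local alignment penalty I - G G^T.
        B[np.ix_(idx, idx)] += np.eye(k) - G @ G.T
    w, v = np.linalg.eigh(B)
    # Global coordinates: eigenvectors 2..d+1 (skip the constant one).
    return v[:, 1:d + 1], w
```

For data lying exactly on a d-dimensional affine subspace, the alignment matrix has d+1 (near-)zero eigenvalues, which is one way to sanity-check the construction.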
Funding: Supported in part by the National Natural Science Foundation of China (No. 61572407).
Abstract: In the big data era, data are generated from different sources or observed from different views. These data are referred to as multi-view data. Unleashing the power of knowledge in multi-view data is very important in big data mining and analysis. This calls for advanced techniques that consider the diversity of different views, while fusing these data. Multi-view Clustering (MvC) has attracted increasing attention in recent years by aiming to exploit complementary and consensus information across multiple views. This paper summarizes a large number of multi-view clustering algorithms, provides a taxonomy according to the mechanisms and principles involved, and classifies these algorithms into five categories, namely, co-training style algorithms, multi-kernel learning, multi-view graph clustering, multi-view subspace clustering, and multi-task multi-view clustering. Therein, multi-view graph clustering is further categorized into graph-based, network-based, and spectral-based methods. Multi-view subspace clustering is further divided into subspace learning-based and non-negative matrix factorization-based methods. This paper not only introduces the mechanisms for each category of methods, but also gives a few examples of how these techniques are used. In addition, it lists some publicly available multi-view datasets. Overall, this paper serves as an introductory text and survey for multi-view clustering.
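As a concrete taste of the graph-based category, one simple fusion strategy is to build an affinity graph per view, average the graphs, and run spectral embedding on the result. The sketch below is illustrative only and is not taken from the survey; the Gaussian-kernel affinity and uniform view weighting are assumptions.

```python
import numpy as np

def fused_spectral_embedding(views, n_components=2, sigma=1.0):
    """Embed points described by multiple views via a fused affinity graph.

    views : list of (N, D_v) arrays, one feature matrix per view.
    Returns an (N, n_components) spectral embedding of the fused graph.
    """
    N = views[0].shape[0]
    W = np.zeros((N, N))
    for V in views:
        # Gaussian affinity within this view.
        d2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        W += np.exp(-d2 / (2.0 * sigma ** 2))
    W /= len(views)                      # uniform average across views
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(N) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(L)
    # Rows of the bottom eigenvectors are the fused coordinates;
    # cluster them with, e.g., k-means to obtain the final partition.
    return eigvecs[:, :n_components]
```

More refined methods in this category learn per-view weights or a consensus graph instead of averaging, but the pipeline shape (per-view graph, fusion, spectral clustering) is the same.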
Funding: Supported by the State Administration of Foreign Experts Affairs of China, the National Natural Science Foundation of China (Grant Nos. 10971136, 10831003, 61072147, 11071159), the Chunhui Plan of the Ministry of Education of China, the Zhejiang Innovation Project (Grant No. T200905), the Natural Science Foundation of Shanghai, and the Shanghai Leading Academic Discipline Project (Grant No. J50101).
Abstract: The invariant subspace method is refined to present more unity and more diversity of exact solutions to evolution equations. The key idea is to take subspaces of solutions to linear ordinary differential equations as invariant subspaces that evolution equations admit. A two-component nonlinear system of dissipative equations is analyzed to shed light on the resulting theory, and two concrete examples are given to find invariant subspaces associated with 2nd-order and 3rd-order linear ordinary differential equations and their corresponding exact solutions with generalized separated variables.
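The key idea can be seen in a standard textbook example (a single scalar equation, not the two-component system studied here). The porous-medium-type equation $u_t = u\,u_{xx} + u_x^2$ admits the invariant subspace $W = \operatorname{span}\{1, x, x^2\}$, i.e. the solution space of the 3rd-order linear ODE $y''' = 0$:

```latex
% Ansatz with generalized separated variables on W = span{1, x, x^2}:
\[
u(x,t) = a(t) + b(t)\,x + c(t)\,x^2 .
\]
% The nonlinear operator maps W into itself:
\[
u\,u_{xx} + u_x^2
  = \bigl(2ac + b^2\bigr) + 6bc\,x + 6c^2\,x^2 \in W,
\]
% so the PDE reduces to a finite-dimensional ODE system for the coefficients:
\[
a' = 2ac + b^2, \qquad b' = 6bc, \qquad c' = 6c^2 .
\]
```

Solving this ODE system (e.g. $c(t) = c_0/(1 - 6c_0 t)$, then $b$ and $a$ in turn) yields exact solutions of the original evolution equation with generalized separation of variables, which is the mechanism the abstract describes.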