Keywords: Dimensionality Reduction, Linear Discriminant Analysis, Laplacian Eigenmaps, Graph Laplacian Features, Kernel Method
Dimensionality reduction is often required as a preprocessing step when analyzing high-dimensional data with machine learning, because such data suffer from the curse of dimensionality. In unsupervised learning, Laplacian Eigenmaps (LE) is widely known as a preprocessing step for normalized spectral clustering, using the graph Laplacian computed from a similarity matrix. In supervised learning, Linear Discriminant Analysis (LDA) can be regarded as a dimensionality reduction that maximizes the degree of separation defined by the intra-class and inter-class variances computed from known label information. In this presentation, we show that LE and LDA can be unified by extending the label information to the probability of belonging to each cluster. We derive Kernel Graph Laplacian Features (KGLF) as a bridge between them, and clarify that the eigenvalues of the similarity matrix are the reciprocals of the cluster sizes and the eigenvectors are the probabilities of belonging to each cluster.
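The LE preprocessing mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' KGLF method: it assumes a Gaussian (heat-kernel) similarity matrix and solves the standard generalized eigenproblem L v = λ D v, keeping the eigenvectors with the smallest nonzero eigenvalues as the low-dimensional embedding. The function name, the bandwidth parameter `sigma`, and the toy data are all illustrative choices.

```python
import numpy as np

def laplacian_eigenmaps(X, dim=2, sigma=1.0):
    """Embed the rows of X into `dim` dimensions via Laplacian Eigenmaps.

    Builds a Gaussian similarity matrix W, forms the graph Laplacian
    L = D - W, and solves L v = lam * D v through the symmetric
    normalization D^{-1/2} L D^{-1/2}.  Illustrative sketch only.
    """
    # Pairwise squared distances -> heat-kernel similarities.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    W = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)          # no self-similarity

    deg = W.sum(axis=1)               # node degrees
    L = np.diag(deg) - W              # unnormalized graph Laplacian

    # Symmetric normalization reduces L v = lam D v to an ordinary
    # symmetric eigenproblem, solvable with eigh.
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L_sym = d_inv_sqrt @ L @ d_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L_sym)

    # Skip the trivial constant eigenvector (eigenvalue ~ 0) and keep
    # the next `dim` eigenvectors as the embedding coordinates.
    Y = d_inv_sqrt @ eigvecs[:, 1:dim + 1]
    return Y

# Toy example: two well-separated blobs in 3-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 3)),
               rng.normal(5.0, 0.1, (10, 3))])
Y = laplacian_eigenmaps(X, dim=2)
print(Y.shape)  # (20, 2)
```

In a spectral-clustering pipeline, a k-means step would typically be run on the rows of `Y`; the abstract's observation concerns exactly these eigenvectors, reinterpreted as cluster-membership probabilities.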